diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES
index 978be7ca3312e..01894721d1785 100644
--- a/OWNERS_ALIASES
+++ b/OWNERS_ALIASES
@@ -92,15 +92,21 @@ aliases:
- daminisatya
- mittalyashu
sig-docs-id-owners: # Admins for Indonesian content
+ - ariscahyadi
+ - danninov
- girikuncoro
+ - habibrosyad
- irvifa
+ - phanama
+ - wahyuoi
sig-docs-id-reviews: # PR reviews for Indonesian content
+ - ariscahyadi
+ - danninov
- girikuncoro
- habibrosyad
- irvifa
- - wahyuoi
- phanama
- - danninov
+ - wahyuoi
sig-docs-it-owners: # Admins for Italian content
- fabriziopandini
- Fale
diff --git a/README-pt.md b/README-pt.md
index 8367b0247c9c5..0992f6c045ce3 100644
--- a/README-pt.md
+++ b/README-pt.md
@@ -2,11 +2,11 @@
[](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)
-Bem vindos! Este repositório abriga todos os recursos necessários para criar o [website e documentação do Kubernetes](https://kubernetes.io/). Estamos muito satisfeitos por você querer contribuir!
+Bem-vindos! Este repositório contém todos os recursos necessários para criar o [website e documentação do Kubernetes](https://kubernetes.io/). Estamos muito satisfeitos por você querer contribuir!
# Utilizando este repositório
-Você pode executar o website localmente utilizando o Hugo (versão Extended), ou você pode executa-ló em um container runtime. É altamente recomendável usar um container runtime, pois garante a consistência na implantação do website real.
+Você pode executar o website localmente utilizando o Hugo (versão Extended), ou você pode executá-lo em um container runtime. É altamente recomendável utilizar um container runtime, pois garante a consistência na implantação do website real.
## Pré-requisitos
@@ -40,7 +40,7 @@ make container-image
make container-serve
```
-Abra seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fontes, Hugo atualiza o website e força a atualização do navegador.
+Abra seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fontes, o Hugo atualiza o website e força a atualização do navegador.
## Executando o website localmente utilizando o Hugo
@@ -56,10 +56,57 @@ make serve
Isso iniciará localmente o Hugo na porta 1313. Abra o seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fontes, o Hugo atualiza o website e força uma atualização no navegador.
+## Construindo a página de referência da API
+
+A página de referência da API localizada em `content/en/docs/reference/kubernetes-api` é construída a partir da especificação do Swagger utilizando https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs.
+
+Siga os passos abaixo para atualizar a página de referência para uma nova versão do Kubernetes:
+
+OBS: substitua o "v1.20" no exemplo a seguir pela versão a ser atualizada.
+
+1. Obter o submódulo `kubernetes-resources-reference`:
+
+```
+git submodule update --init --recursive --depth 1
+```
+
+2. Criar a nova versão da API no submódulo e adicionar a especificação do Swagger:
+
+```
+mkdir api-ref-generator/gen-resourcesdocs/api/v1.20
+curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-generator/gen-resourcesdocs/api/v1.20/swagger.json
+```
+
+3. Copiar o sumário e os campos de configuração para a nova versão a partir da versão anterior:
+
+```
+cp api-ref-generator/gen-resourcesdocs/api/v1.19/* api-ref-generator/gen-resourcesdocs/api/v1.20/
+```
+
+4. Ajustar os arquivos `toc.yaml` e `fields.yaml` para refletir as mudanças entre as duas versões.
+
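+Por exemplo, você pode comparar os arquivos das duas versões para identificar o que precisa mudar (exemplo ilustrativo):
+
+```
+diff -u api-ref-generator/gen-resourcesdocs/api/v1.19/toc.yaml api-ref-generator/gen-resourcesdocs/api/v1.20/toc.yaml
+```
+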
+5. Em seguida, gerar as páginas:
+
+```
+make api-reference
+```
+
+Você pode validar o resultado localmente gerando e disponibilizando o site a partir da imagem do container:
+
+```
+make container-image
+make container-serve
+```
+
+Abra o seu navegador em http://localhost:1313/docs/reference/kubernetes-api/ para visualizar a página de referência da API.
+
+6. Quando todas as mudanças forem refletidas nos arquivos de configuração `toc.yaml` e `fields.yaml`, crie um pull request com a nova página de referência da API.
+
## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
-Por motivos técnicos, o Hugo é disponibilizado em dois conjuntos de binários. O website atual funciona apenas na versão **Hugo Extended**. Na [página de releases](https://github.com/gohugoio/hugo/releases) procure por arquivos com `extended` no nome. Para confirmar, execute `hugo version` e procure pela palavra `extended`.
+Por motivos técnicos, o Hugo é disponibilizado em dois conjuntos de binários. O website atual funciona apenas na versão **Hugo Extended**. Na [página de releases](https://github.com/gohugoio/hugo/releases) procure por arquivos com `extended` no nome. Para confirmar, execute `hugo version` e procure pela palavra `extended`.
### Troubleshooting macOS for too many open files
@@ -110,9 +157,9 @@ Você também pode entrar em contato com os mantenedores deste projeto em:
Você pode clicar no botão **Fork** na área superior direita da tela para criar uma cópia desse repositório na sua conta do GitHub. Esta cópia é chamada de *fork*. Faça as alterações desejadas no seu fork e, quando estiver pronto para enviar as alterações para nós, vá até o fork e crie um novo **pull request** para nos informar sobre isso.
-Depois que seu **pull request** for criado, um revisor do Kubernetes assumirá a responsabilidade de fornecer um feedback claro e objetivo. Como proprietário do pull request, **é sua responsabilidade modificar seu pull request para atender ao feedback que foi fornecido a você pelo revisor do Kubernetes.**
+Depois que seu **pull request** for criado, um revisor do Kubernetes assumirá a responsabilidade de fornecer um feedback claro e objetivo. Como proprietário do pull request, **é sua responsabilidade modificar seu pull request para atender ao feedback que foi fornecido a você pelo revisor do Kubernetes.**
-Observe também que você pode acabar tendo mais de um revisor do Kubernetes para fornecer seu feedback ou você pode acabar obtendo feedback de um outro revisor do Kubernetes diferente daquele originalmente designado para lhe fornecer o feedback.
+Observe também que você pode acabar tendo mais de um revisor do Kubernetes para fornecer seu feedback ou você pode acabar obtendo feedback de um outro revisor do Kubernetes diferente daquele originalmente designado para lhe fornecer o feedback.
Além disso, em alguns casos, um de seus revisores pode solicitar uma revisão técnica de um [revisor técnico do Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers) quando necessário. Os revisores farão o melhor para fornecer feedbacks em tempo hábil, mas o tempo de resposta pode variar de acordo com as circunstâncias.
@@ -134,4 +181,4 @@ A participação na comunidade Kubernetes é regida pelo [Código de Conduta da
# Obrigado!
-O Kubernetes conta com a participação da comunidade e nós realmente agradecemos suas contribuições para o nosso website e nossa documentação!
+O Kubernetes prospera com a participação da comunidade e nós realmente agradecemos suas contribuições para o nosso website e nossa documentação!
\ No newline at end of file
diff --git a/README.md b/README.md
index 44dcc7a7ca679..8ec876eef283d 100644
--- a/README.md
+++ b/README.md
@@ -100,6 +100,8 @@ make container-image
make container-serve
```
+In a web browser, go to http://localhost:1313/docs/reference/kubernetes-api/ to view the API reference.
+
6. When all changes of the new contract are reflected into the configuration files `toc.yaml` and `fields.yaml`, create a Pull Request with the newly generated API reference pages.
## Troubleshooting
diff --git a/content/en/blog/_posts/2017-03-00-Advanced-Scheduling-In-Kubernetes.md b/content/en/blog/_posts/2017-03-00-Advanced-Scheduling-In-Kubernetes.md
index 5d7f0383c5e2f..722b1e59b0356 100644
--- a/content/en/blog/_posts/2017-03-00-Advanced-Scheduling-In-Kubernetes.md
+++ b/content/en/blog/_posts/2017-03-00-Advanced-Scheduling-In-Kubernetes.md
@@ -20,21 +20,14 @@ For example, if we want to require scheduling on a node that is in the us-centra
```
-affinity:
-
- nodeAffinity:
-
- requiredDuringSchedulingIgnoredDuringExecution:
-
- nodeSelectorTerms:
-
- - matchExpressions:
-
- - key: "failure-domain.beta.kubernetes.io/zone"
-
- operator: In
-
- values: ["us-central1-a"]
+  affinity:
+    nodeAffinity:
+      requiredDuringSchedulingIgnoredDuringExecution:
+        nodeSelectorTerms:
+        - matchExpressions:
+          - key: "failure-domain.beta.kubernetes.io/zone"
+            operator: In
+            values: ["us-central1-a"]
```
@@ -44,21 +37,14 @@ Preferred rules mean that if nodes match the rules, they will be chosen first, a
```
-affinity:
-
- nodeAffinity:
-
- preferredDuringSchedulingIgnoredDuringExecution:
-
- nodeSelectorTerms:
-
- - matchExpressions:
-
- - key: "failure-domain.beta.kubernetes.io/zone"
-
- operator: In
-
- values: ["us-central1-a"]
+  affinity:
+    nodeAffinity:
+      preferredDuringSchedulingIgnoredDuringExecution:
+      - weight: 1
+        preference:
+          matchExpressions:
+          - key: "failure-domain.beta.kubernetes.io/zone"
+            operator: In
+            values: ["us-central1-a"]
```
@@ -67,21 +53,14 @@ Node anti-affinity can be achieved by using negative operators. So for instance
```
-affinity:
-
- nodeAffinity:
-
- requiredDuringSchedulingIgnoredDuringExecution:
-
- nodeSelectorTerms:
-
- - matchExpressions:
-
- - key: "failure-domain.beta.kubernetes.io/zone"
-
- operator: NotIn
-
- values: ["us-central1-a"]
+  affinity:
+    nodeAffinity:
+      requiredDuringSchedulingIgnoredDuringExecution:
+        nodeSelectorTerms:
+        - matchExpressions:
+          - key: "failure-domain.beta.kubernetes.io/zone"
+            operator: NotIn
+            values: ["us-central1-a"]
```
@@ -99,7 +78,7 @@ The kubectl command allows you to set taints on nodes, for example:
```
kubectl taint nodes node1 key=value:NoSchedule
- ```
+```
creates a taint that marks the node as unschedulable by any pods that do not have a toleration for taint with key key, value value, and effect NoSchedule. (The other taint effects are PreferNoSchedule, which is the preferred version of NoSchedule, and NoExecute, which means any pods that are running on the node when the taint is applied will be evicted unless they tolerate the taint.) The toleration you would add to a PodSpec to have the corresponding pod tolerate this taint would look like this
@@ -107,15 +86,11 @@ creates a taint that marks the node as unschedulable by any pods that do not hav
```
-tolerations:
-
-- key: "key"
-
- operator: "Equal"
-
- value: "value"
-
- effect: "NoSchedule"
+  tolerations:
+  - key: "key"
+    operator: "Equal"
+    value: "value"
+    effect: "NoSchedule"
```
@@ -138,21 +113,13 @@ Let’s look at an example. Say you have front-ends in service S1, and they comm
```
affinity:
-
  podAffinity:
-
    requiredDuringSchedulingIgnoredDuringExecution:
-
    - labelSelector:
-
        matchExpressions:
-
        - key: service
-
          operator: In
-
          values: ["S1"]
-
      topologyKey: failure-domain.beta.kubernetes.io/zone
```
@@ -172,25 +139,15 @@ Here we have a Pod where we specify the schedulerName field:
```
apiVersion: v1
-
kind: Pod
-
metadata:
-
  name: nginx
-
  labels:
-
    app: nginx
-
spec:
-
  schedulerName: my-scheduler
-
  containers:
-
  - name: nginx
-
    image: nginx:1.10
```
diff --git a/content/en/blog/_posts/2019-03-15-Kubernetes-setup-using-Ansible-and-Vagrant.md b/content/en/blog/_posts/2019-03-15-Kubernetes-setup-using-Ansible-and-Vagrant.md
index 8b31d1df0b79b..247748b6f0059 100644
--- a/content/en/blog/_posts/2019-03-15-Kubernetes-setup-using-Ansible-and-Vagrant.md
+++ b/content/en/blog/_posts/2019-03-15-Kubernetes-setup-using-Ansible-and-Vagrant.md
@@ -66,6 +66,7 @@ Vagrant.configure("2") do |config|
end
end
end
+end
```
### Step 2: Create an Ansible playbook for Kubernetes master.
diff --git a/content/en/community/_index.html b/content/en/community/_index.html
index e1ebb9e9cb1ac..ad9cab5d945a3 100644
--- a/content/en/community/_index.html
+++ b/content/en/community/_index.html
@@ -19,6 +19,7 @@
diff --git a/content/en/community/static/community-values.md b/content/en/community/static/community-values.md
new file mode 100644
index 0000000000000..f6469a3e61ad2
--- /dev/null
+++ b/content/en/community/static/community-values.md
@@ -0,0 +1,28 @@
+
+
+# Kubernetes Community Values
+
+Kubernetes Community culture is frequently cited as a substantial contributor to the meteoric rise of this Open Source project. Below are the distilled values which have evolved over the last many years in our community pushing our project and peers toward constant improvement.
+
+## Distribution is better than centralization
+
+The scale of the Kubernetes project is only viable through high-trust and high-visibility distribution of work, which includes delegation of authority, decision making, technical design, code ownership, and documentation. Distributed asynchronous ownership, collaboration, communication and decision making are the cornerstone of our world-wide community.
+
+## Community over product or company
+
+We are here as a community first, our allegiance is to the intentional stewardship of the Kubernetes project for the benefit of all its members and users everywhere. We support working together publicly for the common goal of a vibrant interoperable ecosystem providing an excellent experience for our users. Individuals gain status through work, companies gain status through their commitments to support this community and fund the resources necessary for the project to operate.
+
+## Automation over process
+
+Large projects have a lot of less exciting, yet, hard work. We value time spent automating repetitive work more highly than toil. Where that work cannot be automated, it is our culture to recognize and reward all types of contributions. However, heroism is not sustainable.
+
+## Inclusive is better than exclusive
+
+Broadly successful and useful technology requires different perspectives and skill sets which can only be heard in a welcoming and respectful environment. Community membership is a privilege, not a right. Community Leadership is earned through effort, scope, quality, quantity, and duration of contributions. Our community shows respect for the time and effort put into a discussion regardless of where a contributor is on their growth path.
+
+## Evolution is better than stagnation
+
+Openness to new ideas and studied technological evolution make Kubernetes a stronger project. Continual improvement, servant leadership, mentorship and respect are the foundations of the Kubernetes project culture. It is the duty for leaders in the Kubernetes community to find, sponsor, and promote new community members. Leaders should expect to step aside. Community members should expect to step up.
+
+**"Culture eats strategy for breakfast." --Peter Drucker**
diff --git a/content/en/community/values.md b/content/en/community/values.md
new file mode 100644
index 0000000000000..4ae1fe30b6d55
--- /dev/null
+++ b/content/en/community/values.md
@@ -0,0 +1,13 @@
+---
+title: Community
+layout: basic
+cid: community
+css: /css/community.css
+---
+
+
+
+
+{{< include "/static/community-values.md" >}}
+
+
diff --git a/content/en/docs/concepts/cluster-administration/flow-control.md b/content/en/docs/concepts/cluster-administration/flow-control.md
index 6cb4c386d3616..e8fd3e4061a88 100644
--- a/content/en/docs/concepts/cluster-administration/flow-control.md
+++ b/content/en/docs/concepts/cluster-administration/flow-control.md
@@ -59,7 +59,7 @@ kube-apiserver \
```
Alternatively, you can enable the v1alpha1 version of the API group
-with `--runtime-config=flowcontrol.apiserver.k8s.io/v1beta1=true`.
+with `--runtime-config=flowcontrol.apiserver.k8s.io/v1alpha1=true`.
The command-line flag `--enable-priority-and-fairness=false` will disable the
API Priority and Fairness feature, even if other flags have enabled it.
diff --git a/content/en/docs/concepts/cluster-administration/manage-deployment.md b/content/en/docs/concepts/cluster-administration/manage-deployment.md
index d9814a887a147..fa31fbd35f599 100644
--- a/content/en/docs/concepts/cluster-administration/manage-deployment.md
+++ b/content/en/docs/concepts/cluster-administration/manage-deployment.md
@@ -45,7 +45,7 @@ kubectl apply -f https://k8s.io/examples/application/nginx/
`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.
-It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, then you can then simply deploy all of the components of your stack en masse.
+It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you can deploy all of the components of your stack together.
A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into github:
@@ -265,7 +265,7 @@ For a more concrete example, check the [tutorial of deploying Ghost](https://git
## Updating labels
Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`.
-For example, if you want to label all your nginx pods as frontend tier, simply run:
+For example, if you want to label all your nginx pods as frontend tier, run:
```shell
kubectl label pods -l app=nginx tier=fe
@@ -411,7 +411,7 @@ and
## Disruptive updates
-In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file:
+In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can modify your original configuration file:
```shell
kubectl replace -f https://k8s.io/examples/application/nginx/nginx-deployment.yaml --force
@@ -448,7 +448,7 @@ kubectl scale deployment my-nginx --current-replicas=1 --replicas=3
deployment.apps/my-nginx scaled
```
-To update to version 1.16.1, simply change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1`, with the kubectl commands we learned above.
+To update to version 1.16.1, change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1` using the previous kubectl commands.
```shell
kubectl edit deployment/my-nginx
diff --git a/content/en/docs/concepts/configuration/configmap.md b/content/en/docs/concepts/configuration/configmap.md
index 9a134dfc9901c..ad735388a0d57 100644
--- a/content/en/docs/concepts/configuration/configmap.md
+++ b/content/en/docs/concepts/configuration/configmap.md
@@ -225,7 +225,7 @@ The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync
However, the kubelet uses its local cache for getting the current value of the ConfigMap.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
-A ConfigMap can be either propagated by watch (default), ttl-based, or simply redirecting
+A ConfigMap can be either propagated by watch (default), ttl-based, or by redirecting
all requests directly to the API server.
As a result, the total delay from the moment when the ConfigMap is updated to the moment
when new keys are projected to the Pod can be as long as the kubelet sync period + cache
diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md
index 5a6a0dd09239e..45cf9297ca804 100644
--- a/content/en/docs/concepts/configuration/secret.md
+++ b/content/en/docs/concepts/configuration/secret.md
@@ -669,7 +669,7 @@ The kubelet checks whether the mounted secret is fresh on every periodic sync.
However, the kubelet uses its local cache for getting the current value of the Secret.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go).
-A Secret can be either propagated by watch (default), ttl-based, or simply redirecting
+A Secret can be either propagated by watch (default), ttl-based, or by redirecting
all requests directly to the API server.
As a result, the total delay from the moment when the Secret is updated to the moment
when new keys are projected to the Pod can be as long as the kubelet sync period + cache
diff --git a/content/en/docs/concepts/containers/container-lifecycle-hooks.md b/content/en/docs/concepts/containers/container-lifecycle-hooks.md
index b315ba6f597ad..96569f95189cf 100644
--- a/content/en/docs/concepts/containers/container-lifecycle-hooks.md
+++ b/content/en/docs/concepts/containers/container-lifecycle-hooks.md
@@ -36,10 +36,13 @@ No parameters are passed to the handler.
`PreStop`
-This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state.
-It is blocking, meaning it is synchronous,
-so it must complete before the signal to stop the container can be sent.
-No parameters are passed to the handler.
+This hook is called immediately before a container is terminated due to an API request or management
+event such as a liveness/startup probe failure, preemption, resource contention and others. A call
+to the `PreStop` hook fails if the container is already in a terminated or completed state and the
+hook must complete before the TERM signal to stop the container can be sent. The Pod's termination
+grace period countdown begins before the `PreStop` hook is executed, so regardless of the outcome of
+the handler, the container will eventually terminate within the Pod's termination grace period. No
+parameters are passed to the handler.
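+
+For example, a minimal sketch of a `PreStop` hook that gives an application time
+to shut down cleanly (the image, command and timings are illustrative):
+
+```yaml
+spec:
+  terminationGracePeriodSeconds: 60
+  containers:
+  - name: app
+    image: nginx
+    lifecycle:
+      preStop:
+        exec:
+          # delay shutdown so in-flight requests can drain; the TERM signal
+          # is sent only after this command completes
+          command: ["/bin/sh", "-c", "sleep 10"]
+```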
A more detailed description of the termination behavior can be found in
[Termination of Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination).
@@ -65,19 +68,15 @@ the Container ENTRYPOINT and hook fire asynchronously.
However, if the hook takes too long to run or hangs,
the Container cannot reach a `running` state.
-`PreStop` hooks are not executed asynchronously from the signal
-to stop the Container; the hook must complete its execution before
-the signal can be sent.
-If a `PreStop` hook hangs during execution,
-the Pod's phase will be `Terminating` and remain there until the Pod is
-killed after its `terminationGracePeriodSeconds` expires.
-This grace period applies to the total time it takes for both
-the `PreStop` hook to execute and for the Container to stop normally.
-If, for example, `terminationGracePeriodSeconds` is 60, and the hook
-takes 55 seconds to complete, and the Container takes 10 seconds to stop
-normally after receiving the signal, then the Container will be killed
-before it can stop normally, since `terminationGracePeriodSeconds` is
-less than the total time (55+10) it takes for these two things to happen.
+`PreStop` hooks are not executed asynchronously from the signal to stop the Container; the hook must
+complete its execution before the TERM signal can be sent. If a `PreStop` hook hangs during
+execution, the Pod's phase will be `Terminating` and remain there until the Pod is killed after its
+`terminationGracePeriodSeconds` expires. This grace period applies to the total time it takes for
+both the `PreStop` hook to execute and for the Container to stop normally. If, for example,
+`terminationGracePeriodSeconds` is 60, and the hook takes 55 seconds to complete, and the Container
+takes 10 seconds to stop normally after receiving the signal, then the Container will be killed
+before it can stop normally, since `terminationGracePeriodSeconds` is less than the total time
+(55+10) it takes for these two things to happen.
If either a `PostStart` or `PreStop` hook fails,
it kills the Container.
diff --git a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md
index dcfef3f6b6180..5457dc92048c1 100644
--- a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md
+++ b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md
@@ -31,7 +31,7 @@ Once a custom resource is installed, users can create and access its objects usi
## Custom controllers
-On their own, custom resources simply let you store and retrieve structured data.
+On their own, custom resources let you store and retrieve structured data.
When you combine a custom resource with a *custom controller*, custom resources
provide a true _declarative API_.
@@ -120,7 +120,7 @@ Kubernetes provides two ways to add custom resources to your cluster:
Kubernetes provides these two options to meet the needs of different users, so that neither ease of use nor flexibility is compromised.
-Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, it simply appears that the Kubernetes API is extended.
+Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called [API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA). To users, the Kubernetes API appears extended.
CRDs allow users to create new types of resources without adding another API server. You do not need to understand API Aggregation to use CRDs.
diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
index 7b53fa326f3d5..0ec8bf81b1d00 100644
--- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
+++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
@@ -24,7 +24,7 @@ Network plugins in Kubernetes come in a few flavors:
The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as CRI manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:
* `cni-bin-dir`: Kubelet probes this directory for plugins on startup
-* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni".
+* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is `cni`.
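+
+For example, a kubelet using CNI plugins installed under the default paths might
+be started with (illustrative; other required kubelet flags omitted):
+
+```shell
+kubelet --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin
+```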
## Network Plugin Requirements
diff --git a/content/en/docs/concepts/extend-kubernetes/service-catalog.md b/content/en/docs/concepts/extend-kubernetes/service-catalog.md
index 3aa967578841c..af0271d9aba70 100644
--- a/content/en/docs/concepts/extend-kubernetes/service-catalog.md
+++ b/content/en/docs/concepts/extend-kubernetes/service-catalog.md
@@ -26,7 +26,7 @@ Fortunately, there is a cloud provider that offers message queuing as a managed
A cluster operator can set up Service Catalog and use it to communicate with the cloud provider's service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster.
The application developer therefore does not need to be concerned with the implementation details or management of the message queue.
-The application can simply use it as a service.
+The application can access the message queue as a service.
## Architecture
diff --git a/content/en/docs/concepts/overview/working-with-objects/labels.md b/content/en/docs/concepts/overview/working-with-objects/labels.md
index 2feec438011bf..7ff6f267a0a1f 100644
--- a/content/en/docs/concepts/overview/working-with-objects/labels.md
+++ b/content/en/docs/concepts/overview/working-with-objects/labels.md
@@ -98,7 +98,7 @@ For both equality-based and set-based conditions there is no logical _OR_ (`||`)
### _Equality-based_ requirement
_Equality-_ or _inequality-based_ requirements allow filtering by label keys and values. Matching objects must satisfy all of the specified label constraints, though they may have additional labels as well.
-Three kinds of operators are admitted `=`,`==`,`!=`. The first two represent _equality_ (and are simply synonyms), while the latter represents _inequality_. For example:
+Three kinds of operators are admitted: `=`, `==`, `!=`. The first two represent _equality_ (and are synonyms), while the latter represents _inequality_. For example:
```
environment = production
diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md
index 17f30906bfb7b..f355a8f539b65 100644
--- a/content/en/docs/concepts/policy/pod-security-policy.md
+++ b/content/en/docs/concepts/policy/pod-security-policy.md
@@ -197,7 +197,7 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n
### Create a policy and a pod
Define the example PodSecurityPolicy object in a file. This is a policy that
-simply prevents the creation of privileged pods.
+prevents the creation of privileged pods.
The name of a PodSecurityPolicy object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md
index 0edb1be338047..d5c94d99cf559 100644
--- a/content/en/docs/concepts/policy/resource-quotas.md
+++ b/content/en/docs/concepts/policy/resource-quotas.md
@@ -610,17 +610,28 @@ plugins:
values: ["cluster-services"]
```
-Now, "cluster-services" pods will be allowed in only those namespaces where a quota object with a matching `scopeSelector` is present.
-For example:
+Then, create a resource quota object in the `kube-system` namespace:
-```yaml
- scopeSelector:
- matchExpressions:
- - scopeName: PriorityClass
- operator: In
- values: ["cluster-services"]
+{{< codenew file="policy/priority-class-resourcequota.yaml" >}}
+
+```shell
+kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system
+```
+
+```
+resourcequota/pods-cluster-services created
```
+In this case, a pod creation will be allowed if:
+
+1. the Pod's `priorityClassName` is not specified.
+1. the Pod's `priorityClassName` is specified to a value other than `cluster-services`.
+1. the Pod's `priorityClassName` is set to `cluster-services`, it is to be created
+ in the `kube-system` namespace, and it has passed the resource quota check.
+
+A Pod creation request is rejected if its `priorityClassName` is set to `cluster-services`
+and it is to be created in a namespace other than `kube-system`.
+
## {{% heading "whatsnext" %}}
- See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.
diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
index abe4f4b9eb84b..b6b2bd79a8114 100644
--- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
+++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -261,7 +261,7 @@ for performance and security reasons, there are some constraints on topologyKey:
and `preferredDuringSchedulingIgnoredDuringExecution`.
2. For pod anti-affinity, empty `topologyKey` is also not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
and `preferredDuringSchedulingIgnoredDuringExecution`.
-3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or simply disable it.
+3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or disable it.
4. Except for the above cases, the `topologyKey` can be any legal label-key.
In addition to `labelSelector` and `topologyKey`, you can optionally specify a list `namespaces`
diff --git a/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md b/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md
index 932e076dfca76..7936f9dedc662 100644
--- a/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md
+++ b/content/en/docs/concepts/scheduling-eviction/scheduler-perf-tuning.md
@@ -107,7 +107,7 @@ value being calculated based on the cluster size. There is also a hardcoded
minimum value of 50 nodes.
{{< note >}}In clusters with less than 50 feasible nodes, the scheduler still
-checks all the nodes, simply because there are not enough feasible nodes to stop
+checks all the nodes because there are not enough feasible nodes to stop
the scheduler's search early.
In a small cluster, if you set a low value for `percentageOfNodesToScore`, your
diff --git a/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md b/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md
index ae32f840fd8d8..06ed901c2a8bb 100644
--- a/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md
+++ b/content/en/docs/concepts/scheduling-eviction/scheduling-framework.md
@@ -183,7 +183,7 @@ the three things:
{{< note >}}
While any plugin can access the list of "waiting" Pods and approve them
-(see [`FrameworkHandle`](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20180409-scheduling-framework.md#frameworkhandle)), we expect only the permit
+(see [`FrameworkHandle`](https://git.k8s.io/enhancements/keps/sig-scheduling/624-scheduling-framework#frameworkhandle)), we expect only the permit
plugins to approve binding of reserved Pods that are in "waiting" state. Once a Pod
is approved, it is sent to the [PreBind](#pre-bind) phase.
{{< /note >}}
diff --git a/content/en/docs/concepts/security/controlling-access.md b/content/en/docs/concepts/security/controlling-access.md
index 62dc273cf7020..e025ac10e3d98 100644
--- a/content/en/docs/concepts/security/controlling-access.md
+++ b/content/en/docs/concepts/security/controlling-access.md
@@ -28,7 +28,7 @@ a private certificate authority (CA), or based on a public key infrastructure li
to a generally recognized CA.
If your cluster uses a private certificate authority, you need a copy of that CA
-certifcate configured into your `~/.kube/config` on the client, so that you can
+certificate configured into your `~/.kube/config` on the client, so that you can
trust the connection and be confident it was not intercepted.
Your client can present a TLS client certificate at this stage.
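+
+For example, a kubeconfig cluster entry that trusts a private CA might look like
+this (a sketch; the server address and file path are hypothetical):
+
+```yaml
+clusters:
+- name: my-cluster
+  cluster:
+    server: https://my-cluster.example.com:6443
+    # CA certificate used to verify the API server's serving certificate
+    certificate-authority: /home/user/.kube/my-cluster-ca.crt
+```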
@@ -135,7 +135,7 @@ for the corresponding API object, and then written to the object store (shown as
The previous discussion applies to requests sent to the secure port of the API server
(the typical case). The API server can actually serve on 2 ports:
-By default the Kubernetes API server serves HTTP on 2 ports:
+By default, the Kubernetes API server serves HTTP on 2 ports:
1. `localhost` port:
diff --git a/content/en/docs/concepts/security/overview.md b/content/en/docs/concepts/security/overview.md
index fe9129c109bd8..b23a07c79ab2d 100644
--- a/content/en/docs/concepts/security/overview.md
+++ b/content/en/docs/concepts/security/overview.md
@@ -120,6 +120,7 @@ Area of Concern for Containers | Recommendation |
Container Vulnerability Scanning and OS Dependency Security | As part of an image build step, you should scan your containers for known vulnerabilities.
Image Signing and Enforcement | Sign container images to maintain a system of trust for the content of your containers.
Disallow privileged users | When constructing containers, consult your documentation for how to create users inside of the containers that have the least level of operating system privilege necessary in order to carry out the goal of the container.
+Use container runtime with stronger isolation | Select [container runtime classes](/docs/concepts/containers/runtime-class/) that provide stronger isolation.
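+
+For example, a minimal sketch of a RuntimeClass selecting a sandboxed runtime;
+the `handler` must match a runtime configured on your nodes (`runsc`, the gVisor
+handler, is illustrative here):
+
+```yaml
+apiVersion: node.k8s.io/v1
+kind: RuntimeClass
+metadata:
+  name: sandboxed
+handler: runsc
+```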
## Code
@@ -152,3 +153,4 @@ Learn about related Kubernetes security topics:
* [Data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane
* [Data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
* [Secrets in Kubernetes](/docs/concepts/configuration/secret/)
+* [Runtime class](/docs/concepts/containers/runtime-class)
diff --git a/content/en/docs/concepts/security/pod-security-standards.md b/content/en/docs/concepts/security/pod-security-standards.md
index 18c1b7e86269b..a3c9ee138e8fd 100644
--- a/content/en/docs/concepts/security/pod-security-standards.md
+++ b/content/en/docs/concepts/security/pod-security-standards.md
@@ -32,7 +32,7 @@ should range from highly restricted to highly flexible:
- **_Privileged_** - Unrestricted policy, providing the widest possible level of permissions. This
policy allows for known privilege escalations.
-- **_Baseline/Default_** - Minimally restrictive policy while preventing known privilege
+- **_Baseline_** - Minimally restrictive policy while preventing known privilege
escalations. Allows the default (minimally specified) Pod configuration.
- **_Restricted_** - Heavily restricted policy, following current Pod hardening best practices.
@@ -48,9 +48,9 @@ mechanisms (such as gatekeeper), the privileged profile may be an absence of app
rather than an instantiated policy. In contrast, for a deny-by-default mechanism (such as Pod
Security Policy) the privileged policy should enable all controls (disable all restrictions).
-### Baseline/Default
+### Baseline
-The Baseline/Default policy is aimed at ease of adoption for common containerized workloads while
+The Baseline policy is aimed at ease of adoption for common containerized workloads while
preventing known privilege escalations. This policy is targeted at application operators and
developers of non-critical applications. The following listed controls should be
enforced/disallowed:
@@ -115,7 +115,9 @@ enforced/disallowed:
AppArmor (optional)
- On supported hosts, the 'runtime/default' AppArmor profile is applied by default. The default policy should prevent overriding or disabling the policy, or restrict overrides to an allowed set of profiles.
+ On supported hosts, the 'runtime/default' AppArmor profile is applied by default.
+ The baseline policy should prevent overriding or disabling the default AppArmor
+ profile, or restrict overrides to an allowed set of profiles.
Restricted Fields:
metadata.annotations['container.apparmor.security.beta.kubernetes.io/*']
Allowed Values: 'runtime/default', undefined
@@ -175,7 +177,7 @@ well as lower-trust users.The following listed controls should be enforced/disal
Policy
- Everything from the default profile.
+ Everything from the baseline profile.
Volume Types
@@ -275,7 +277,7 @@ of individual policies are not defined here.
## FAQ
-### Why isn't there a profile between privileged and default?
+### Why isn't there a profile between privileged and baseline?
The three profiles defined here have a clear linear progression from most secure (restricted) to least
secure (privileged), and cover a broad set of workloads. Privileges required above the baseline
diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md
index 93474f24fa021..f02aede0e18ee 100644
--- a/content/en/docs/concepts/services-networking/dns-pod-service.md
+++ b/content/en/docs/concepts/services-networking/dns-pod-service.md
@@ -25,9 +25,9 @@ assigned a DNS name. By default, a client Pod's DNS search list will
include the Pod's own namespace and the cluster's default domain. This is best
illustrated by example:
-Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
-in namespace `bar` can look up this service by simply doing a DNS query for
-`foo`. A Pod running in namespace `quux` can look up this service by doing a
+Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
+in namespace `bar` can look up this service by querying a DNS service for
+`foo`. A Pod running in namespace `quux` can look up this service by doing a
DNS query for `foo.bar`.
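+
+For example, from a Pod running in namespace `quux` (the client Pod name
+`dns-test` is hypothetical):
+
+```shell
+kubectl exec -n quux -it dns-test -- nslookup foo.bar
+```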
The following sections detail the supported record types and layout that is
diff --git a/content/en/docs/concepts/services-networking/dual-stack.md b/content/en/docs/concepts/services-networking/dual-stack.md
index d8b900847f31f..20cbcb5f33d55 100644
--- a/content/en/docs/concepts/services-networking/dual-stack.md
+++ b/content/en/docs/concepts/services-networking/dual-stack.md
@@ -163,7 +163,7 @@ status:
loadBalancer: {}
```
-1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-controller-manager) even though `.spec.ClusterIP` is set to `None`.
+1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to `None`.
{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md
index 119958b91564d..d0405a060da06 100644
--- a/content/en/docs/concepts/services-networking/ingress-controllers.md
+++ b/content/en/docs/concepts/services-networking/ingress-controllers.md
@@ -49,6 +49,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a library to build your custom proxy.
* The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an
ingress controller for the [Traefik](https://traefik.io/traefik/) proxy.
+* [Tyk Operator](https://github.com/TykTechnologies/tyk-operator) extends Ingress with Custom Resources to bring API Management capabilities to Ingress. Tyk Operator works with the Open Source Tyk Gateway & Tyk Cloud control plane.
* [Voyager](https://appscode.com/products/voyager) is an ingress controller for
[HAProxy](https://www.haproxy.org/#desc).
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index 348c1616cedf7..bcbf7d7f75a46 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -430,7 +430,7 @@ Services by their DNS name.
For example, if you have a Service called `my-service` in a Kubernetes
namespace `my-ns`, the control plane and the DNS Service acting together
create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace
-should be able to find it by simply doing a name lookup for `my-service`
+should be able to find the service by doing a name lookup for `my-service`
(`my-service.my-ns` would also work).
Pods in other namespaces must qualify the name as `my-service.my-ns`. These names
@@ -463,7 +463,7 @@ selectors defined:
For headless Services that define selectors, the endpoints controller creates
`Endpoints` records in the API, and modifies the DNS configuration to return
-records (addresses) that point directly to the `Pods` backing the `Service`.
+A records (IP addresses) that point directly to the `Pods` backing the `Service`.
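+
+For example, a DNS query for a headless Service returns one A record per backing
+Pod (the names here are hypothetical):
+
+```shell
+dig +short my-headless-service.my-ns.svc.cluster.local A
+```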
### Without selectors
@@ -1163,7 +1163,7 @@ rule kicks in, and redirects the packets to the proxy's own port.
The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend.
This means that Service owners can choose any port they want without risk of
-collision. Clients can simply connect to an IP and port, without being aware
+collision. Clients can connect to an IP and port, without being aware
of which Pods they are actually accessing.
#### iptables
diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md
index 3423c575db592..ef46b7f99aae4 100644
--- a/content/en/docs/concepts/storage/persistent-volumes.md
+++ b/content/en/docs/concepts/storage/persistent-volumes.md
@@ -487,7 +487,7 @@ The following volume types support mount options:
* VsphereVolume
* iSCSI
-Mount options are not validated, so mount will simply fail if one is invalid.
+Mount options are not validated. If a mount option is invalid, the mount fails.
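+
+For example, a PersistentVolume that passes NFS mount options (a sketch; the
+server and path are placeholders):
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pv0001
+spec:
+  capacity:
+    storage: 5Gi
+  accessModes:
+    - ReadWriteOnce
+  mountOptions:
+    - hard
+    - nfsvers=4.1
+  nfs:
+    path: /exports/data
+    server: nfs.example.com
+```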
In the past, the annotation `volume.beta.kubernetes.io/mount-options` was used instead
of the `mountOptions` attribute. This annotation is still working; however,
diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md
index e6846c7ea4cd4..6834977d70b1d 100644
--- a/content/en/docs/concepts/storage/storage-classes.md
+++ b/content/en/docs/concepts/storage/storage-classes.md
@@ -149,7 +149,7 @@ mount options specified in the `mountOptions` field of the class.
If the volume plugin does not support mount options but mount options are
specified, provisioning will fail. Mount options are not validated on either
-the class or PV, so mount of the PV will simply fail if one is invalid.
+the class or PV. If a mount option is invalid, the PV mount fails.
### Volume Binding Mode
diff --git a/content/en/docs/concepts/storage/volume-pvc-datasource.md b/content/en/docs/concepts/storage/volume-pvc-datasource.md
index ac8d16041da71..8210df661cb76 100644
--- a/content/en/docs/concepts/storage/volume-pvc-datasource.md
+++ b/content/en/docs/concepts/storage/volume-pvc-datasource.md
@@ -24,7 +24,7 @@ The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature add
A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a "new" empty Volume, the back end device creates an exact duplicate of the specified Volume.
-The implementation of cloning, from the perspective of the Kubernetes API, simply adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).
+The implementation of cloning, from the perspective of the Kubernetes API, adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).
Users need to be aware of the following when using this feature:
diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index f8475d6284fdb..8d84a519c06f1 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -106,6 +106,8 @@ spec:
fsType: ext4
```
+If the EBS volume is partitioned, you can supply the optional field `partition: "<partition number>"` to specify which partition to mount on.
+
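+For example (a fragment of a Pod volume definition; the volume ID is a
+placeholder and `partition: 1` assumes the first partition):
+
+```yaml
+awsElasticBlockStore:
+  volumeID: "<volume id>"
+  fsType: ext4
+  partition: 1
+```
+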
#### AWS EBS CSI migration
{{< feature-state for_k8s_version="v1.17" state="beta" >}}
diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md
index 481c6f5017559..3624135a2044d 100644
--- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md
+++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md
@@ -90,6 +90,11 @@ If `startingDeadlineSeconds` is set to a large value or left unset (the default)
and if `concurrencyPolicy` is set to `Allow`, the jobs will always run
at least once.
+{{< caution >}}
+If `startingDeadlineSeconds` is set to a value less than 10 seconds, the CronJob may not be scheduled. This is because the CronJob controller checks for missed schedules every 10 seconds.
+{{< /caution >}}
+
+
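+A minimal sketch of a CronJob that sets `startingDeadlineSeconds` (the name,
+image and schedule are illustrative):
+
+```yaml
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+  name: hello
+spec:
+  schedule: "*/1 * * * *"
+  # skip a run that could not start within 200 seconds of its scheduled time
+  startingDeadlineSeconds: 200
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          containers:
+          - name: hello
+            image: busybox
+            command: ["echo", "hello"]
+          restartPolicy: OnFailure
+```
+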
For every CronJob, the CronJob {{< glossary_tooltip term_id="controller" >}} checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the job and logs the error
````
@@ -128,4 +133,3 @@ documents the format of CronJob `schedule` fields.
For instructions on creating and working with cron jobs, and for an example of CronJob
manifest, see [Running automated tasks with cron jobs](/docs/tasks/job/automated-tasks-with-cron-jobs).
-
diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md
index d8e7646aed1ba..9ba746c2ada68 100644
--- a/content/en/docs/concepts/workloads/controllers/deployment.md
+++ b/content/en/docs/concepts/workloads/controllers/deployment.md
@@ -47,7 +47,7 @@ In this example:
* A Deployment named `nginx-deployment` is created, indicated by the `.metadata.name` field.
* The Deployment creates three replicated Pods, indicated by the `.spec.replicas` field.
* The `.spec.selector` field defines how the Deployment finds which Pods to manage.
- In this case, you simply select a label that is defined in the Pod template (`app: nginx`).
+ In this case, you select a label that is defined in the Pod template (`app: nginx`).
However, more sophisticated selection rules are possible,
as long as the Pod template itself satisfies the rule.
@@ -171,13 +171,15 @@ Follow the steps given below to update your Deployment:
```shell
kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
```
- or simply use the following command:
-
+
+ or use the following command:
+
```shell
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
```
- The output is similar to this:
+ The output is similar to:
+
```
deployment.apps/nginx-deployment image updated
```
@@ -188,7 +190,8 @@ Follow the steps given below to update your Deployment:
kubectl edit deployment.v1.apps/nginx-deployment
```
- The output is similar to this:
+ The output is similar to:
+
```
deployment.apps/nginx-deployment edited
```
@@ -200,10 +203,13 @@ Follow the steps given below to update your Deployment:
```
The output is similar to this:
+
```
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
```
+
or
+
```
deployment "nginx-deployment" successfully rolled out
```
@@ -212,10 +218,11 @@ Get more details on your updated Deployment:
* After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`.
The output is similar to this:
- ```
- NAME READY UP-TO-DATE AVAILABLE AGE
- nginx-deployment 3/3 3 3 36s
- ```
+
+ ```ini
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ nginx-deployment 3/3 3 3 36s
+ ```
* Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it
up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
index a6427bedb3073..23d87f81fd617 100644
--- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
+++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
@@ -180,16 +180,16 @@ delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl wi
for it to delete each pod before deleting the ReplicationController itself. If this kubectl
command is interrupted, it can be restarted.
-When using the REST API or go client library, you need to do the steps explicitly (scale replicas to
+When using the REST API or Go client library, you need to do the steps explicitly (scale replicas to
0, wait for pod deletions, then delete the ReplicationController).
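+
+The same sequence, sketched with kubectl for illustration (the resource and
+label names are hypothetical):
+
+```shell
+kubectl scale rc my-rc --replicas=0
+kubectl get pods -l app=my-app --watch   # wait until the pods are gone
+kubectl delete rc my-rc
+```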
-### Deleting just a ReplicationController
+### Deleting only a ReplicationController
You can delete a ReplicationController without affecting any of its pods.
Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).
-When using the REST API or go client library, simply delete the ReplicationController object.
+When using the REST API or Go client library, you can delete the ReplicationController object.
Once the original is deleted, you can create a new ReplicationController to replace it. As long
as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
@@ -240,7 +240,7 @@ Pods created by a ReplicationController are intended to be fungible and semantic
## Responsibilities of the ReplicationController
-The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
+The ReplicationController ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](https://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://issue.k8s.io/170)).
diff --git a/content/en/docs/concepts/workloads/pods/ephemeral-containers.md b/content/en/docs/concepts/workloads/pods/ephemeral-containers.md
index 49894937e3e00..011fa4adf4053 100644
--- a/content/en/docs/concepts/workloads/pods/ephemeral-containers.md
+++ b/content/en/docs/concepts/workloads/pods/ephemeral-containers.md
@@ -103,7 +103,7 @@ the ephemeral container to add as an `EphemeralContainers` list:
"apiVersion": "v1",
"kind": "EphemeralContainers",
"metadata": {
- "name": "example-pod"
+ "name": "example-pod"
},
"ephemeralContainers": [{
"command": [
diff --git a/content/en/docs/contribute/participate/roles-and-responsibilities.md b/content/en/docs/contribute/participate/roles-and-responsibilities.md
index 8ebe7a1303c98..4e8632ac0bb88 100644
--- a/content/en/docs/contribute/participate/roles-and-responsibilities.md
+++ b/content/en/docs/contribute/participate/roles-and-responsibilities.md
@@ -52,7 +52,7 @@ Members can:
{{< note >}}
Using `/lgtm` triggers automation. If you want to provide non-binding
- approval, simply commenting "LGTM" works too!
+ approval, commenting "LGTM" works too!
{{< /note >}}
- Use the `/hold` comment to block merging for a pull request
diff --git a/content/en/docs/contribute/style/style-guide.md b/content/en/docs/contribute/style/style-guide.md
index b4864dbabf095..2f0c39168ca7d 100644
--- a/content/en/docs/contribute/style/style-guide.md
+++ b/content/en/docs/contribute/style/style-guide.md
@@ -44,22 +44,25 @@ The English-language documentation uses U.S. English spelling and grammar.
### Use upper camel case for API objects
-When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal Case. When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
+When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal case. You may see different capitalization, such as "configMap", in the [API Reference](/docs/reference/kubernetes-api/). When writing general documentation, it's better to use upper camel case, calling it "ConfigMap" instead.
+
+When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
+
+You may use the word "resource", "API", or "object" to clarify a Kubernetes resource type in a sentence.
Don't split the API object name into separate words. For example, use
PodTemplateList, not Pod Template List.
-Refer to API objects without saying "object," unless omitting "object"
-leads to an awkward construction.
+The following examples focus on capitalization. Review the related guidance on [Code Style](#code-style-inline-code) for more information on formatting API objects.
-{{< table caption = "Do and Don't - API objects" >}}
+{{< table caption = "Do and Don't - Use Pascal case for API objects" >}}
Do | Don't
:--| :-----
-The pod has two containers. | The Pod has two containers.
-The HorizontalPodAutoscaler is responsible for ... | The HorizontalPodAutoscaler object is responsible for ...
-A PodList is a list of pods. | A Pod List is a list of pods.
-The two ContainerPorts ... | The two ContainerPort objects ...
-The two ContainerStateTerminated objects ... | The two ContainerStateTerminateds ...
+The HorizontalPodAutoscaler resource is responsible for ... | The Horizontal pod autoscaler is responsible for ...
+A PodList object is a list of pods. | A Pod List object is a list of pods.
+The Volume object contains a `hostPath` field. | The volume object contains a hostPath field.
+Every ConfigMap object is part of a namespace. | Every configMap object is part of a namespace.
+For managing confidential data, consider using the Secret API. | For managing confidential data, consider using the secret API.
{{< /table >}}
@@ -113,12 +116,12 @@ The copy is called a "fork". | The copy is called a "fork."
## Inline code formatting
-### Use code style for inline code, commands, and API objects
+### Use code style for inline code, commands, and API objects {#code-style-inline-code}
For inline code in an HTML document, use the `<code>` tag. In a Markdown
document, use the backtick (`` ` ``).
-{{< table caption = "Do and Don't - Use code style for inline code and commands" >}}
+{{< table caption = "Do and Don't - Use code style for inline code, commands, and API objects" >}}
Do | Don't
:--| :-----
The `kubectl run` command creates a `Pod`. | The "kubectl run" command creates a pod.
@@ -573,6 +576,10 @@ Avoid making promises or giving hints about the future. If you need to talk abou
an alpha feature, put the text under a heading that identifies it as alpha
information.
+An exception to this rule is documentation about announced deprecations
+targeting removal in future versions. One example of documentation like this
+is the [Deprecated API migration guide](/docs/reference/using-api/deprecation-guide/).
+
### Avoid statements that will soon be out of date
Avoid words like "currently" and "new." A feature that is new today might not be
diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md
index 3ff113bb63a72..ef1d9a03a51af 100644
--- a/content/en/docs/reference/access-authn-authz/admission-controllers.md
+++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md
@@ -110,7 +110,7 @@ This admission controller allows all pods into the cluster. It is deprecated bec
This admission controller modifies every new Pod to force the image pull policy to Always. This is useful in a
multitenant cluster so that users can be assured that their private images can only be used by those
who have the credentials to pull them. Without this admission controller, once an image has been pulled to a
-node, any pod from any user can use it simply by knowing the image's name (assuming the Pod is
+node, any pod from any user can use it by knowing the image's name (assuming the Pod is
scheduled onto the right node), without any authorization check against the image. When this admission controller
is enabled, images are always pulled prior to starting containers, which means valid credentials are
required.
diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md
index 8c2c4fa5208ec..fc76cbb2f40f6 100644
--- a/content/en/docs/reference/access-authn-authz/authentication.md
+++ b/content/en/docs/reference/access-authn-authz/authentication.md
@@ -205,8 +205,10 @@ spec:
```
Service account bearer tokens are perfectly valid to use outside the cluster and
can be used to create identities for long standing jobs that wish to talk to the
-Kubernetes API. To manually create a service account, simply use the `kubectl
+Kubernetes API. To manually create a service account, use the `kubectl
create serviceaccount (NAME)` command. This creates a service account in the
current namespace and an associated secret.
@@ -320,6 +322,7 @@ sequenceDiagram
8. Once authorized the API server returns a response to `kubectl`
9. `kubectl` provides feedback to the user
+
Since all of the data needed to validate who you are is in the `id_token`, Kubernetes doesn't need to
"phone home" to the identity provider. In a model where every request is stateless this provides a very scalable solution for authentication. It does offer a few challenges:
@@ -420,12 +423,12 @@ users:
refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq
name: oidc
```
-Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
+Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret`, storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
##### Option 2 - Use the `--token` Option
-The `kubectl` command lets you pass in a token using the `--token` option. Simply copy and paste the `id_token` into this option:
+The `kubectl` command lets you pass in a token using the `--token` option. Copy and paste the `id_token` into this option:
```bash
kubectl --token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-va_B63jn-_n9LGSCca_6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TK_yF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a7_8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1_I2ulrOVsYx01_yD35-rw get nodes
@@ -731,7 +734,7 @@ to the impersonated user info.
The following HTTP headers can be used to performing an impersonation request:
* `Impersonate-User`: The username to act as.
-* `Impersonate-Group`: A group name to act as. Can be provided multiple times to set multiple groups. Optional. Requires "Impersonate-User"
+* `Impersonate-Group`: A group name to act as. Can be provided multiple times to set multiple groups. Optional. Requires "Impersonate-User".
* `Impersonate-Extra-( extra name )`: A dynamic header used to associate extra fields with the user. Optional. Requires "Impersonate-User". In order to be preserved consistently, `( extra name )` should be lower-case, and any characters which aren't [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6) MUST be utf8 and [percent-encoded](https://tools.ietf.org/html/rfc3986#section-2.1).
{{< note >}}
diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
index d9754afb5660c..67a833518260d 100644
--- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md
+++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
@@ -634,8 +634,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `KubeletCredentialProviders`: Enable kubelet exec credential providers for image pull credentials.
- `KubeletPluginsWatcher`: Enable probe-based plugin watcher utility to enable kubelet
to discover plugins such as [CSI volume drivers](/docs/concepts/storage/volumes/#csi).
-- `KubeletPodResources`: Enable the kubelet's pod resources GRPC endpoint. See
- [Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md)
+- `KubeletPodResources`: Enable the kubelet's pod resources gRPC endpoint. See
+ [Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/606-compute-device-assignment/README.md)
for more details.
- `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and
node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the
diff --git a/content/en/docs/reference/glossary/cluster-operator.md b/content/en/docs/reference/glossary/cluster-operator.md
index c8973438302c1..48bdd4d3dfbdf 100755
--- a/content/en/docs/reference/glossary/cluster-operator.md
+++ b/content/en/docs/reference/glossary/cluster-operator.md
@@ -17,6 +17,6 @@ tags:
Their primary responsibility is keeping a cluster up and running, which may involve periodic maintenance activities or upgrades.
{{< note >}}
-Cluster operators are different from the [Operator pattern](https://coreos.com/operators) that extends the Kubernetes API.
+Cluster operators are different from the [Operator pattern](https://www.openshift.com/learn/topics/operators) that extends the Kubernetes API.
{{< /note >}}
diff --git a/content/en/docs/reference/labels-annotations-taints.md b/content/en/docs/reference/labels-annotations-taints.md
index 78be058013e4f..8f4327cecaa48 100644
--- a/content/en/docs/reference/labels-annotations-taints.md
+++ b/content/en/docs/reference/labels-annotations-taints.md
@@ -114,3 +114,143 @@ The scheduler (through the _VolumeZonePredicate_ predicate) also will ensure tha
If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes, you should consider
adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
+## node.kubernetes.io/windows-build {#nodekubernetesiowindows-build}
+
+Example: `node.kubernetes.io/windows-build=10.0.17763`
+
+Used on: Node
+
+When the kubelet is running on Microsoft Windows, it automatically labels its node to record the version of Windows Server in use.
+
+The label's value is in the format "MajorVersion.MinorVersion.BuildNumber".
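+
+As an illustrative sketch (the Pod name and image are hypothetical, not from this page), a Pod can use this label in a `nodeSelector` to target nodes running a specific Windows build:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: iis-example                                 # hypothetical name
+spec:
+  nodeSelector:
+    kubernetes.io/os: windows
+    node.kubernetes.io/windows-build: '10.0.17763'
+  containers:
+  - name: iis
+    image: mcr.microsoft.com/windows/servercore/iis # illustrative image
+```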
+
+## service.kubernetes.io/headless {#servicekubernetesioheadless}
+
+Example: `service.kubernetes.io/headless=""`
+
+Used on: Endpoints
+
+The control plane adds this label to an Endpoints object when the owning Service is headless.
+
+## kubernetes.io/service-name {#kubernetesioservice-name}
+
+Example: `kubernetes.io/service-name="nginx"`
+
+Used on: Service
+
+Kubernetes uses this label to differentiate multiple Services. Used currently for `ELB` (Elastic Load Balancer) only.
+
+## endpointslice.kubernetes.io/managed-by {#endpointslicekubernetesiomanaged-by}
+
+Example: `endpointslice.kubernetes.io/managed-by="controller"`
+
+Used on: EndpointSlices
+
+The label is used to indicate the controller or entity that manages an EndpointSlice. This label aims to enable different EndpointSlice objects to be managed by different controllers or entities within the same cluster.
+
+## endpointslice.kubernetes.io/skip-mirror {#endpointslicekubernetesioskip-mirror}
+
+Example: `endpointslice.kubernetes.io/skip-mirror="true"`
+
+Used on: Endpoints
+
+The label can be set to `"true"` on an Endpoints resource to indicate that the EndpointSliceMirroring controller should not mirror this resource with EndpointSlices.
+
+## service.kubernetes.io/service-proxy-name {#servicekubernetesioservice-proxy-name}
+
+Example: `service.kubernetes.io/service-proxy-name="foo-bar"`
+
+Used on: Service
+
+The kube-proxy skips Services that carry this label and delegates their handling to a custom proxy instead.
+
+## experimental.windows.kubernetes.io/isolation-type
+
+Example: `experimental.windows.kubernetes.io/isolation-type: "hyperv"`
+
+Used on: Pod
+
+The annotation is used to run Windows containers with Hyper-V isolation. To use the Hyper-V isolation feature and create a Hyper-V isolated container, the kubelet should be started with the feature gate `HyperVContainer=true` and the Pod should include the annotation `experimental.windows.kubernetes.io/isolation-type=hyperv`.
+
+{{< note >}}
+You can only set this annotation on Pods that have a single container.
+{{< /note >}}
+
+## ingressclass.kubernetes.io/is-default-class
+
+Example: `ingressclass.kubernetes.io/is-default-class: "true"`
+
+Used on: IngressClass
+
+When a single IngressClass resource has this annotation set to `"true"`, new Ingress resources without a class specified will be assigned this default class.
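+
+As a sketch (the class and controller names are hypothetical), marking an IngressClass as the cluster default looks like this:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: IngressClass
+metadata:
+  name: example-class                        # hypothetical
+  annotations:
+    ingressclass.kubernetes.io/is-default-class: "true"
+spec:
+  controller: example.com/ingress-controller # hypothetical controller name
+```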
+
+## kubernetes.io/ingress.class (deprecated)
+
+{{< note >}} Starting in v1.18, this annotation is deprecated in favor of `spec.ingressClassName`. {{< /note >}}
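+
+As a sketch of the migration (all names are hypothetical), an Ingress that previously used the annotation would instead set `spec.ingressClassName`:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: example-ingress                      # hypothetical
+  # previously: kubernetes.io/ingress.class: "example-class"
+spec:
+  ingressClassName: example-class            # replaces the deprecated annotation
+  defaultBackend:
+    service:
+      name: example-service                  # hypothetical
+      port:
+        number: 80
+```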
+
+## alpha.kubernetes.io/provided-node-ip
+
+Example: `alpha.kubernetes.io/provided-node-ip: "10.0.0.1"`
+
+Used on: Node
+
+The kubelet can set this annotation on a Node to denote its configured IPv4 address.
+
+When the kubelet is started with the "external" cloud provider, it sets this annotation on the Node to denote an IP address set from the command-line flag (`--node-ip`). This IP is verified with the cloud provider as valid by the cloud-controller-manager.
+
+**The taints listed below are always used on Nodes**
+
+## node.kubernetes.io/not-ready
+
+Example: `node.kubernetes.io/not-ready:NoExecute`
+
+The node controller detects whether a node is ready by monitoring its health and adds or removes this taint accordingly.
+
+## node.kubernetes.io/unreachable
+
+Example: `node.kubernetes.io/unreachable:NoExecute`
+
+The node controller adds the taint to a node corresponding to the [NodeCondition](/docs/concepts/architecture/nodes/#condition) `Ready` being `Unknown`.
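+
+Pods can control how long they stay bound to a node carrying one of these two taints by setting a matching toleration with `tolerationSeconds`. A sketch (the Pod name and image are hypothetical):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example-pod                          # hypothetical
+spec:
+  tolerations:
+  - key: node.kubernetes.io/unreachable
+    operator: Exists
+    effect: NoExecute
+    tolerationSeconds: 60                    # evict 60s after the taint appears
+  containers:
+  - name: app
+    image: registry.example/app:1.0          # hypothetical image
+```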
+
+## node.kubernetes.io/unschedulable
+
+Example: `node.kubernetes.io/unschedulable:NoSchedule`
+
+The taint is added to a node during initialization to avoid a race condition.
+
+## node.kubernetes.io/memory-pressure
+
+Example: `node.kubernetes.io/memory-pressure:NoSchedule`
+
+The kubelet detects memory pressure based on `memory.available` and `allocatableMemory.available` observed on a Node. The observed values are then compared to the corresponding thresholds that can be set on the kubelet to determine if the Node condition and taint should be added/removed.
+
+## node.kubernetes.io/disk-pressure
+
+Example: `node.kubernetes.io/disk-pressure:NoSchedule`
+
+The kubelet detects disk pressure based on `imagefs.available`, `imagefs.inodesFree`, `nodefs.available` and `nodefs.inodesFree` (Linux only) observed on a Node. The observed values are then compared to the corresponding thresholds that can be set on the kubelet to determine if the Node condition and taint should be added/removed.
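+
+The thresholds behind the memory-pressure and disk-pressure taints can be tuned through the kubelet configuration file. A minimal sketch (the values shown are illustrative):
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+evictionHard:
+  memory.available: "100Mi"
+  nodefs.available: "10%"
+  nodefs.inodesFree: "5%"
+  imagefs.available: "15%"
+```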
+
+## node.kubernetes.io/network-unavailable
+
+Example: `node.kubernetes.io/network-unavailable:NoSchedule`
+
+This is initially set by the kubelet when the cloud provider used indicates a requirement for additional network configuration. Only when the route on the cloud is configured properly will the taint be removed by the cloud provider.
+
+## node.kubernetes.io/pid-pressure
+
+Example: `node.kubernetes.io/pid-pressure:NoSchedule`
+
+The kubelet checks the difference between the size of `/proc/sys/kernel/pid_max` and the number of PIDs consumed by Kubernetes on a node to get the number of available PIDs, referred to as the `pid.available` metric. The metric is then compared to the corresponding threshold that can be set on the kubelet to determine if the node condition and taint should be added/removed.
+
+## node.cloudprovider.kubernetes.io/uninitialized
+
+Example: `node.cloudprovider.kubernetes.io/uninitialized:NoSchedule`
+
+When the kubelet is started with the "external" cloud provider, this taint is set on a node to mark it as unusable. Once a controller from the cloud-controller-manager initializes this node, the taint is removed.
+
+## node.cloudprovider.kubernetes.io/shutdown
+
+Example: `node.cloudprovider.kubernetes.io/shutdown:NoSchedule`
+
+If a Node is in a cloud-provider-specified shutdown state, the Node gets tainted accordingly with `node.cloudprovider.kubernetes.io/shutdown` and the taint effect of `NoSchedule`.
+
diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md
index e8b834c334a08..e517a13d52959 100644
--- a/content/en/docs/reference/using-api/api-concepts.md
+++ b/content/en/docs/reference/using-api/api-concepts.md
@@ -258,7 +258,7 @@ Accept: application/json;as=Table;g=meta.k8s.io;v=v1beta1, application/json
## Alternate representations of resources
-By default Kubernetes returns objects serialized to JSON with content type `application/json`. This is the default serialization format for the API. However, clients may request the more efficient Protobuf representation of these objects for better performance at scale. The Kubernetes API implements standard HTTP content type negotiation: passing an `Accept` header with a `GET` call will request that the server return objects in the provided content type, while sending an object in Protobuf to the server for a `PUT` or `POST` call takes the `Content-Type` header. The server will return a `Content-Type` header if the requested format is supported, or the `406 Not acceptable` error if an invalid content type is provided.
+By default, Kubernetes returns objects serialized to JSON with content type `application/json`. This is the default serialization format for the API. However, clients may request the more efficient Protobuf representation of these objects for better performance at scale. The Kubernetes API implements standard HTTP content type negotiation: passing an `Accept` header with a `GET` call will request that the server return objects in the provided content type, while sending an object in Protobuf to the server for a `PUT` or `POST` call takes the `Content-Type` header. The server will return a `Content-Type` header if the requested format is supported, or the `406 Not Acceptable` error if an invalid content type is provided.
See the API documentation for a list of supported content types for each API.
@@ -560,4 +560,4 @@ If you request a a resourceVersion outside the applicable limit then, depending
### Unavailable resource versions
-Servers are not required to serve unrecognized resource versions. List and Get requests for unrecognized resource versions may wait briefly for the resource version to become available, should timeout with a `504 (Gateway Timeout)` if the provided resource versions does not become available in a resonable amount of time, and may respond with a `Retry-After` response header indicating how many seconds a client should wait before retrying the request. Currently the kube-apiserver also identifies these responses with a "Too large resource version" message. Watch requests for a unrecognized resource version may wait indefinitely (until the request timeout) for the resource version to become available.
+Servers are not required to serve unrecognized resource versions. List and Get requests for unrecognized resource versions may wait briefly for the resource version to become available, should time out with a `504 (Gateway Timeout)` if the provided resource version does not become available in a reasonable amount of time, and may respond with a `Retry-After` response header indicating how many seconds a client should wait before retrying the request. Currently, the kube-apiserver also identifies these responses with a "Too large resource version" message. Watch requests for an unrecognized resource version may wait indefinitely (until the request timeout) for the resource version to become available.
diff --git a/content/en/docs/reference/using-api/deprecation-guide.md b/content/en/docs/reference/using-api/deprecation-guide.md
new file mode 100755
index 0000000000000..ee8328cbddfa2
--- /dev/null
+++ b/content/en/docs/reference/using-api/deprecation-guide.md
@@ -0,0 +1,270 @@
+---
+reviewers:
+- liggitt
+- lavalamp
+- thockin
+- smarterclayton
+title: "Deprecated API Migration Guide"
+weight: 45
+content_type: reference
+---
+
+
+
+As the Kubernetes API evolves, APIs are periodically reorganized or upgraded.
+When APIs evolve, the old API is deprecated and eventually removed.
+This page contains information you need to know when migrating from
+deprecated API versions to newer and more stable API versions.
+
+
+
+## Removed APIs by release
+
+
+### v1.25
+
+The **v1.25** release will stop serving the following deprecated API versions:
+
+#### Event {#event-v125}
+
+The **events.k8s.io/v1beta1** API version of Event will no longer be served in v1.25.
+
+* Migrate manifests and API clients to use the **events.k8s.io/v1** API version, available since v1.19.
+* All existing persisted objects are accessible via the new API
+* Notable changes in **events.k8s.io/v1** (see the sketch after this list):
+ * `type` is limited to `Normal` and `Warning`
+ * `involvedObject` is renamed to `regarding`
+ * `action`, `reason`, `reportingController`, and `reportingInstance` are required when creating new **events.k8s.io/v1** Events
+ * use `eventTime` instead of the deprecated `firstTimestamp` field (which is renamed to `deprecatedFirstTimestamp` and not permitted in new **events.k8s.io/v1** Events)
+ * use `series.lastObservedTime` instead of the deprecated `lastTimestamp` field (which is renamed to `deprecatedLastTimestamp` and not permitted in new **events.k8s.io/v1** Events)
+ * use `series.count` instead of the deprecated `count` field (which is renamed to `deprecatedCount` and not permitted in new **events.k8s.io/v1** Events)
+ * use `reportingController` instead of the deprecated `source.component` field (which is renamed to `deprecatedSource.component` and not permitted in new **events.k8s.io/v1** Events)
+ * use `reportingInstance` instead of the deprecated `source.host` field (which is renamed to `deprecatedSource.host` and not permitted in new **events.k8s.io/v1** Events)
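+
+As an illustrative sketch of the renamed fields (all names and values are hypothetical), a minimal **events.k8s.io/v1** Event looks like this:
+
+```yaml
+apiVersion: events.k8s.io/v1
+kind: Event
+metadata:
+  name: example-event                        # hypothetical
+  namespace: default
+eventTime: "2021-01-01T00:00:00.000000Z"     # replaces firstTimestamp/lastTimestamp
+type: Normal                                 # limited to Normal and Warning
+reason: Created
+action: Create                               # required in events.k8s.io/v1
+regarding:                                   # was involvedObject
+  kind: Pod
+  name: example-pod
+  namespace: default
+reportingController: example.com/example-controller   # was source.component
+reportingInstance: example-node                        # was source.host
+note: An illustrative event message.
+```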
+
+#### RuntimeClass {#runtimeclass-v125}
+
+RuntimeClass in the **node.k8s.io/v1beta1** API version will no longer be served in v1.25.
+
+* Migrate manifests and API clients to use the **node.k8s.io/v1** API version, available since v1.20.
+* All existing persisted objects are accessible via the new API
+* No notable changes
+
+### v1.22
+
+The **v1.22** release will stop serving the following deprecated API versions:
+
+#### Webhook resources {#webhook-resources-v122}
+
+The **admissionregistration.k8s.io/v1beta1** API version of MutatingWebhookConfiguration and ValidatingWebhookConfiguration will no longer be served in v1.22.
+
+* Migrate manifests and API clients to use the **admissionregistration.k8s.io/v1** API version, available since v1.16.
+* All existing persisted objects are accessible via the new APIs
+* Notable changes (see the sketch after this list):
+ * `webhooks[*].failurePolicy` default changed from `Ignore` to `Fail` for v1
+ * `webhooks[*].matchPolicy` default changed from `Exact` to `Equivalent` for v1
+ * `webhooks[*].timeoutSeconds` default changed from `30s` to `10s` for v1
+ * `webhooks[*].sideEffects` default value is removed, and the field made required, and only `None` and `NoneOnDryRun` are permitted for v1
+ * `webhooks[*].admissionReviewVersions` default value is removed and the field made required for v1 (supported versions for AdmissionReview are `v1` and `v1beta1`)
+ * `webhooks[*].name` must be unique in the list for objects created via `admissionregistration.k8s.io/v1`
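+
+A sketch of a v1 webhook configuration that states the now-required fields explicitly (all names are hypothetical):
+
+```yaml
+apiVersion: admissionregistration.k8s.io/v1
+kind: ValidatingWebhookConfiguration
+metadata:
+  name: example-webhook                      # hypothetical
+webhooks:
+- name: validate.example.com                 # must be unique within the list
+  rules:
+  - apiGroups: [""]
+    apiVersions: ["v1"]
+    operations: ["CREATE"]
+    resources: ["pods"]
+  clientConfig:
+    service:
+      namespace: example-ns                  # hypothetical
+      name: example-service                  # hypothetical
+      path: /validate
+  sideEffects: None                          # required in v1; no default
+  admissionReviewVersions: ["v1", "v1beta1"] # required in v1; no default
+  failurePolicy: Fail                        # the new v1 default, stated explicitly
+  timeoutSeconds: 10                         # the new v1 default, stated explicitly
+```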
+
+#### CustomResourceDefinition {#customresourcedefinition-v122}
+
+The **apiextensions.k8s.io/v1beta1** API version of CustomResourceDefinition will no longer be served in v1.22.
+
+* Migrate manifests and API clients to use the **apiextensions.k8s.io/v1** API version, available since v1.16.
+* All existing persisted objects are accessible via the new API
+* Notable changes (see the sketch after this list):
+ * `spec.scope` is no longer defaulted to `Namespaced` and must be explicitly specified
+ * `spec.version` is removed in v1; use `spec.versions` instead
+ * `spec.validation` is removed in v1; use `spec.versions[*].schema` instead
+ * `spec.subresources` is removed in v1; use `spec.versions[*].subresources` instead
+ * `spec.additionalPrinterColumns` is removed in v1; use `spec.versions[*].additionalPrinterColumns` instead
+ * `spec.conversion.webhookClientConfig` is moved to `spec.conversion.webhook.clientConfig` in v1
+ * `spec.conversion.conversionReviewVersions` is moved to `spec.conversion.webhook.conversionReviewVersions` in v1
+ * `spec.versions[*].schema.openAPIV3Schema` is now required when creating v1 CustomResourceDefinition objects, and must be a [structural schema](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema)
+ * `spec.preserveUnknownFields: true` is disallowed when creating v1 CustomResourceDefinition objects; it must be specified within schema definitions as `x-kubernetes-preserve-unknown-fields: true`
+ * In `additionalPrinterColumns` items, the `JSONPath` field was renamed to `jsonPath` in v1 (fixes [#66531](https://github.com/kubernetes/kubernetes/issues/66531))
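+
+A sketch of a minimal v1 CustomResourceDefinition with an explicit scope and a structural schema per version (the group and kind are hypothetical):
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: crontabs.example.com                 # hypothetical
+spec:
+  group: example.com
+  scope: Namespaced                          # no longer defaulted in v1
+  names:
+    plural: crontabs
+    singular: crontab
+    kind: CronTab
+  versions:
+  - name: v1
+    served: true
+    storage: true
+    schema:
+      openAPIV3Schema:                       # required structural schema in v1
+        type: object
+        properties:
+          spec:
+            type: object
+            properties:
+              cronSpec:
+                type: string
+```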
+
+#### APIService {#apiservice-v122}
+
+The **apiregistration.k8s.io/v1beta1** API version of APIService will no longer be served in v1.22.
+
+* Migrate manifests and API clients to use the **apiregistration.k8s.io/v1** API version, available since v1.10.
+* All existing persisted objects are accessible via the new API
+* No notable changes
+
+#### TokenReview {#tokenreview-v122}
+
+The **authentication.k8s.io/v1beta1** API version of TokenReview will no longer be served in v1.22.
+
+* Migrate manifests and API clients to use the **authentication.k8s.io/v1** API version, available since v1.6.
+* No notable changes
+
+#### SubjectAccessReview resources {#subjectaccessreview-resources-v122}
+
+The **authorization.k8s.io/v1beta1** API version of LocalSubjectAccessReview, SelfSubjectAccessReview, and SubjectAccessReview will no longer be served in v1.22.
+
+* Migrate manifests and API clients to use the **authorization.k8s.io/v1** API version, available since v1.6.
+* Notable changes:
+ * `spec.group` was renamed to `spec.groups` in v1 (fixes [#32709](https://github.com/kubernetes/kubernetes/issues/32709))
+
+#### CertificateSigningRequest {#certificatesigningrequest-v122}
+
+The **certificates.k8s.io/v1beta1** API version of CertificateSigningRequest will no longer be served in v1.22.
+
+* Migrate manifests and API clients to use the **certificates.k8s.io/v1** API version, available since v1.19.
+* All existing persisted objects are accessible via the new API
+* Notable changes in `certificates.k8s.io/v1`:
+ * For API clients requesting certificates:
+ * `spec.signerName` is now required (see [known Kubernetes signers](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers)), and requests for `kubernetes.io/legacy-unknown` are not allowed to be created via the `certificates.k8s.io/v1` API
+ * `spec.usages` is now required, may not contain duplicate values, and must only contain known usages
+ * For API clients approving or signing certificates:
+ * `status.conditions` may not contain duplicate types
+ * `status.conditions[*].status` is now required
+ * `status.certificate` must be PEM-encoded, and contain only `CERTIFICATE` blocks
+
+#### Lease {#lease-v122}
+
+The **coordination.k8s.io/v1beta1** API version of Lease will no longer be served in v1.22.
+
+* Migrate manifests and API clients to use the **coordination.k8s.io/v1** API version, available since v1.14.
+* All existing persisted objects are accessible via the new API
+* No notable changes
+
+#### Ingress {#ingress-v122}
+
+The **extensions/v1beta1** and **networking.k8s.io/v1beta1** API versions of Ingress will no longer be served in v1.22.
+
+* Migrate manifests and API clients to use the **networking.k8s.io/v1** API version, available since v1.19.
+* All existing persisted objects are accessible via the new API
+* Notable changes (see the sketch after this list):
+ * `spec.backend` is renamed to `spec.defaultBackend`
+ * The backend `serviceName` field is renamed to `service.name`
+ * Numeric backend `servicePort` fields are renamed to `service.port.number`
+ * String backend `servicePort` fields are renamed to `service.port.name`
+ * `pathType` is now required for each specified path. Options are `Prefix`, `Exact`, and `ImplementationSpecific`. To match the undefined `v1beta1` behavior, use `ImplementationSpecific`.
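+
+A sketch of the v1 shape (service names and paths are hypothetical):
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: example-ingress                      # hypothetical
+spec:
+  defaultBackend:                            # was spec.backend
+    service:
+      name: frontend                         # was serviceName
+      port:
+        number: 80                           # was servicePort
+  rules:
+  - http:
+      paths:
+      - path: /app
+        pathType: ImplementationSpecific     # now required
+        backend:
+          service:
+            name: frontend
+            port:
+              number: 80
+```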
+
+#### IngressClass {#ingressclass-v122}
+
+The **networking.k8s.io/v1beta1** API version of IngressClass will no longer be served in v1.22.
+
+* Migrate manifests and API clients to use the **networking.k8s.io/v1** API version, available since v1.19.
+* All existing persisted objects are accessible via the new API
+* No notable changes
+
+#### RBAC resources {#rbac-resources-v122}
+
+The **rbac.authorization.k8s.io/v1beta1** API version of ClusterRole, ClusterRoleBinding, Role, and RoleBinding will no longer be served in v1.22.
+
+* Migrate manifests and API clients to use the **rbac.authorization.k8s.io/v1** API version, available since v1.8.
+* All existing persisted objects are accessible via the new APIs
+* No notable changes
+
+#### PriorityClass {#priorityclass-v122}
+
+The **scheduling.k8s.io/v1beta1** API version of PriorityClass will no longer be served in v1.22.
+
+* Migrate manifests and API clients to use the **scheduling.k8s.io/v1** API version, available since v1.14.
+* All existing persisted objects are accessible via the new API
+* No notable changes
+
+#### Storage resources {#storage-resources-v122}
+
+The **storage.k8s.io/v1beta1** API version of CSIDriver, CSINode, StorageClass, and VolumeAttachment will no longer be served in v1.22.
+
+* Migrate manifests and API clients to use the **storage.k8s.io/v1** API version
+ * CSIDriver is available in **storage.k8s.io/v1** since v1.19.
+ * CSINode is available in **storage.k8s.io/v1** since v1.17.
+ * StorageClass is available in **storage.k8s.io/v1** since v1.6.
+ * VolumeAttachment is available in **storage.k8s.io/v1** since v1.13.
+* All existing persisted objects are accessible via the new APIs
+* No notable changes
+
+### v1.16
+
+The **v1.16** release stopped serving the following deprecated API versions:
+
+#### NetworkPolicy {#networkpolicy-v116}
+
+The **extensions/v1beta1** API version of NetworkPolicy is no longer served as of v1.16.
+
+* Migrate manifests and API clients to use the **networking.k8s.io/v1** API version, available since v1.8.
+* All existing persisted objects are accessible via the new API
+
+#### DaemonSet {#daemonset-v116}
+
+The **extensions/v1beta1** and **apps/v1beta2** API versions of DaemonSet are no longer served as of v1.16.
+
+* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
+* All existing persisted objects are accessible via the new API
+* Notable changes:
+ * `spec.templateGeneration` is removed
+ * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
+ * `spec.updateStrategy.type` now defaults to `RollingUpdate` (the default in `extensions/v1beta1` was `OnDelete`)
+
+#### Deployment {#deployment-v116}
+
+The **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API versions of Deployment are no longer served as of v1.16.
+
+* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
+* All existing persisted objects are accessible via the new API
+* Notable changes (see the sketch after this list):
+ * `spec.rollbackTo` is removed
+ * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
+ * `spec.progressDeadlineSeconds` now defaults to `600` seconds (the default in `extensions/v1beta1` was no deadline)
+ * `spec.revisionHistoryLimit` now defaults to `10` (the default in `apps/v1beta1` was `2`, the default in `extensions/v1beta1` was to retain all)
+ * `maxSurge` and `maxUnavailable` now default to `25%` (the default in `extensions/v1beta1` was `1`)
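+
+A sketch of an apps/v1 Deployment with the now-required selector (names and image are hypothetical):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: example-deployment                   # hypothetical
+spec:
+  replicas: 2
+  selector:                                  # required and immutable in apps/v1
+    matchLabels:
+      app: example
+  template:
+    metadata:
+      labels:
+        app: example                         # must match the selector
+    spec:
+      containers:
+      - name: app
+        image: registry.example/app:1.0      # hypothetical image
+```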
+
+#### StatefulSet {#statefulset-v116}
+
+The **apps/v1beta1** and **apps/v1beta2** API versions of StatefulSet are no longer served as of v1.16.
+
+* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
+* All existing persisted objects are accessible via the new API
+* Notable changes:
+ * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
+ * `spec.updateStrategy.type` now defaults to `RollingUpdate` (the default in `apps/v1beta1` was `OnDelete`)
+
+#### ReplicaSet {#replicaset-v116}
+
+The **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API versions of ReplicaSet are no longer served as of v1.16.
+
+* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
+* All existing persisted objects are accessible via the new API
+* Notable changes:
+ * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
+
+## What to do
+
+### Test with deprecated APIs disabled
+
+You can test your clusters by starting an API server with specific API versions disabled
+to simulate upcoming removals. Add the following flag to the API server startup arguments:
+
+`--runtime-config=<group>/<version>=false`
+
+For example:
+
+`--runtime-config=admissionregistration.k8s.io/v1beta1=false,apiextensions.k8s.io/v1beta1=false,...`
+
+### Locate use of deprecated APIs
+
+Use [client warnings, metrics, and audit information available in 1.19+](https://kubernetes.io/blog/2020/09/03/warnings/#deprecation-warnings)
+to locate use of deprecated APIs.
+
+### Migrate to non-deprecated APIs
+
+* Update custom integrations and controllers to call the non-deprecated APIs
+* Change YAML files to reference the non-deprecated APIs
+
+ You can use the `kubectl-convert` command (`kubectl convert` prior to v1.20)
+ to automatically convert an existing object:
+
+ `kubectl-convert -f <file> --output-version <group>/<version>`.
+
+ For example, to convert an older Deployment to `apps/v1`, you can run:
+
+ `kubectl-convert -f ./my-deployment.yaml --output-version apps/v1`
+
+ Note that this may use non-ideal default values. To learn more about a specific
+ resource, check the Kubernetes [API reference](/docs/reference/kubernetes-api/).
diff --git a/content/en/docs/reference/using-api/server-side-apply.md b/content/en/docs/reference/using-api/server-side-apply.md
index 302ae94d8b358..d91497f8f2d96 100644
--- a/content/en/docs/reference/using-api/server-side-apply.md
+++ b/content/en/docs/reference/using-api/server-side-apply.md
@@ -16,10 +16,10 @@ min-kubernetes-server-version: 1.16
## Introduction
-Server Side Apply helps users and controllers manage their resources via
-declarative configurations. It allows them to create and/or modify their
+Server Side Apply helps users and controllers manage their resources through
+declarative configurations. Clients can create and modify their
[objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
-declaratively, simply by sending their fully specified intent.
+declaratively by sending their fully specified intent.
A fully specified intent is a partial object that only includes the fields and
values for which the user has an opinion. That intent either creates a new
diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md
index 59725188d806c..5aa4ac894e2e6 100644
--- a/content/en/docs/setup/production-environment/container-runtimes.md
+++ b/content/en/docs/setup/production-environment/container-runtimes.md
@@ -420,7 +420,7 @@ Start CRI-O:
```shell
sudo systemctl daemon-reload
-sudo systemctl start crio
+sudo systemctl enable crio --now
```
Refer to the [CRI-O installation guide](https://github.com/cri-o/cri-o/blob/master/install.md)
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md b/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md
index 1bcdad0092915..1dd44e9b0b942 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/control-plane-flags.md
@@ -78,7 +78,7 @@ kind: ClusterConfiguration
kubernetesVersion: v1.16.0
scheduler:
extraArgs:
- address: 0.0.0.0
+ bind-address: 0.0.0.0
config: /home/johndoe/schedconfig.yaml
kubeconfig: /home/johndoe/kubeconfig.yaml
```
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
index 4d932e3e05d2f..6516a1882569e 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
@@ -434,7 +434,7 @@ Now remove the node:
kubectl delete node <node name>
```
-If you wish to start over simply run `kubeadm init` or `kubeadm join` with the
+If you wish to start over, run `kubeadm init` or `kubeadm join` with the
appropriate arguments.
### Clean up the control plane
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
index 90f80db6cad3b..394820324d7a7 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
@@ -308,13 +308,6 @@ or `/etc/default/kubelet`(`/etc/sysconfig/kubelet` for RPMs), please remove it a
(stored in `/var/lib/kubelet/config.yaml` by default).
{{< /note >}}
-Restarting the kubelet is required:
-
-```bash
-sudo systemctl daemon-reload
-sudo systemctl restart kubelet
-```
-
The automatic detection of cgroup driver for other container runtimes
like CRI-O and containerd is work in progress.
diff --git a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
index 03ff2648164cd..014df012c785a 100644
--- a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
+++ b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
@@ -547,7 +547,7 @@ Your main source of help for troubleshooting your Kubernetes cluster should star
1. After launching `start.ps1`, flanneld is stuck in "Waiting for the Network to be created"
- There are numerous reports of this [issue which are being investigated](https://github.com/coreos/flannel/issues/1066); most likely it is a timing issue for when the management IP of the flannel network is set. A workaround is to simply relaunch start.ps1 or relaunch it manually as follows:
+ There are numerous reports of this [issue](https://github.com/coreos/flannel/issues/1066); most likely it is a timing issue for when the management IP of the flannel network is set. A workaround is to relaunch start.ps1 or relaunch it manually as follows:
```powershell
PS C:> [Environment]::SetEnvironmentVariable("NODE_NAME", "")
diff --git a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md
index 6c9c05cc90cb6..9133a2de1b9a3 100644
--- a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md
+++ b/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md
@@ -23,7 +23,7 @@ Windows applications constitute a large portion of the services and applications
## Before you begin
* Create a Kubernetes cluster that includes a [master and a worker node running Windows Server](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes)
-* It is important to note that creating and deploying services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. [Kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The example in the section below is provided simply to jumpstart your experience with Windows containers.
+* It is important to note that creating and deploying services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. [Kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The example in the section below is provided to jumpstart your experience with Windows containers.
## Getting Started: Deploying a Windows container
diff --git a/content/en/docs/tasks/access-application-cluster/access-cluster.md b/content/en/docs/tasks/access-application-cluster/access-cluster.md
index 7f74320118a3d..400a54ffb23d5 100644
--- a/content/en/docs/tasks/access-application-cluster/access-cluster.md
+++ b/content/en/docs/tasks/access-application-cluster/access-cluster.md
@@ -280,7 +280,7 @@ at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-l
#### Manually constructing apiserver proxy URLs
-As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
+As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy`
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-api.md b/content/en/docs/tasks/administer-cluster/access-cluster-api.md
index ffe200b118200..99fd3596b35a0 100644
--- a/content/en/docs/tasks/administer-cluster/access-cluster-api.md
+++ b/content/en/docs/tasks/administer-cluster/access-cluster-api.md
@@ -215,7 +215,7 @@ for i in ret.items:
#### Java client
-* To install the [Java Client](https://github.com/kubernetes-client/java), simply execute :
+To install the [Java Client](https://github.com/kubernetes-client/java), run:
```shell
# Clone java library
diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-services.md b/content/en/docs/tasks/administer-cluster/access-cluster-services.md
index c318a3df35388..f6ba4e4fc0f13 100644
--- a/content/en/docs/tasks/administer-cluster/access-cluster-services.md
+++ b/content/en/docs/tasks/administer-cluster/access-cluster-services.md
@@ -83,7 +83,7 @@ See [Access Clusters Using the Kubernetes API](/docs/tasks/administer-cluster/ac
#### Manually constructing apiserver proxy URLs
-As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
+As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you append to the service's proxy URL:
`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`[https:]service_name[:port_name]`*`/proxy`
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL.
diff --git a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md
index 9c08a2a4ad04d..0c7c3c3ca1e2f 100644
--- a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md
+++ b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md
@@ -32,7 +32,7 @@ for example, it might provision storage that is too expensive. If this is the ca
you can either change the default StorageClass or disable it completely to avoid
dynamic provisioning of storage.
-Simply deleting the default StorageClass may not work, as it may be re-created
+Deleting the default StorageClass may not work, as it may be re-created
automatically by the addon manager running in your cluster. Please consult the docs for your installation
for details about addon manager and how to disable individual addons.
diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md
index 2824cce64261e..40987152e8ab3 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md
@@ -201,6 +201,9 @@ allow.textmode=true
how.nice.to.look=fairlyNice
```
+When `kubectl` creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the `binaryData` field of the ConfigMap, and not in `data`. Both text and binary data sources can be combined in one ConfigMap.
+If you want to view the `binaryData` keys (and their values) in a ConfigMap, you can run `kubectl get configmap -o jsonpath='{.binaryData}' <name>`.
+
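+A sketch of the resulting object shape (the ConfigMap name, keys, and bytes are hypothetical):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: example-config                       # hypothetical
+data:
+  mode: production                           # UTF-8 text lands under data
+binaryData:
+  app.bin: "3q2+7w=="                        # base64 for the bytes 0xDE 0xAD 0xBE 0xEF
+```
+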
Use the option `--from-env-file` to create a ConfigMap from an env-file, for example:
```shell
@@ -687,4 +690,3 @@ data:
* Follow a real world example of [Configuring Redis using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/).
-
diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md
index ca3d0b2966f50..d96a5c8270d7f 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md
@@ -23,16 +23,10 @@ authenticated by the apiserver as a particular User Account (currently this is
usually `admin`, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver.
When they do, they are authenticated as a particular Service Account (for example, `default`).
-
-
-
## {{% heading "prerequisites" %}}
-
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
-
-
## Use the Default Service Account to access the API server.
@@ -129,7 +123,7 @@ then you will see that a token has automatically been created and is referenced
You may use authorization plugins to [set permissions on service accounts](/docs/reference/access-authn-authz/rbac/#service-account-permissions).
-To use a non-default service account, simply set the `spec.serviceAccountName`
+To use a non-default service account, set the `spec.serviceAccountName`
field of a pod to the name of the service account you wish to use.
The service account has to exist at the time the pod is created, or it will be rejected.
diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
index ce0b5b3656a4c..32b857c156f95 100644
--- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
+++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
@@ -9,18 +9,13 @@ weight: 100
This page shows how to create a Pod that uses a Secret to pull an image from a
private Docker registry or repository.
-
-
## {{% heading "prerequisites" %}}
-
* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
* To do this exercise, you need a
[Docker ID](https://docs.docker.com/docker-id/) and password.
-
-
## Log in to Docker
@@ -106,7 +101,8 @@ kubectl create secret docker-registry regcred --docker-server=
-* `<your-registry-server>` is your Private Docker Registry FQDN. (https://index.docker.io/v1/ for DockerHub)
+* `<your-registry-server>` is your Private Docker Registry FQDN.
+  Use `https://index.docker.io/v2/` for DockerHub.
* `<your-name>` is your Docker username.
* `<your-pword>` is your Docker password.
* `<your-email>` is your Docker email.
@@ -192,7 +188,8 @@ your.private.registry.example.com/janedoe/jdoe-private:v1
```
To pull the image from the private registry, Kubernetes needs credentials.
-The `imagePullSecrets` field in the configuration file specifies that Kubernetes should get the credentials from a Secret named `regcred`.
+The `imagePullSecrets` field in the configuration file specifies that
+Kubernetes should get the credentials from a Secret named `regcred`.
Create a Pod that uses your Secret, and verify that the Pod is running:
@@ -201,11 +198,8 @@ kubectl apply -f my-private-reg-pod.yaml
kubectl get pod private-reg
```
-
-
## {{% heading "whatsnext" %}}
-
* Learn more about [Secrets](/docs/concepts/configuration/secret/).
* Learn more about [using a private registry](/docs/concepts/containers/images/#using-a-private-registry).
* Learn more about [adding image pull secrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account).
@@ -213,5 +207,3 @@ kubectl get pod private-reg
* See [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core).
* See the `imagePullSecrets` field of [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).
-
-
diff --git a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
index 4fadbb3f42ddb..dc4fba348092e 100644
--- a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
+++ b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
@@ -12,16 +12,10 @@ What's Kompose? It's a conversion tool for all things compose (namely Docker Com
More information can be found on the Kompose website at [http://kompose.io](http://kompose.io).
-
-
-
## {{% heading "prerequisites" %}}
-
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
-
-
## Install Kompose
@@ -35,13 +29,13 @@ Kompose is released via GitHub on a three-week cycle, you can see all current re
```sh
# Linux
-curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-linux-amd64 -o kompose
+curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose
# macOS
-curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-darwin-amd64 -o kompose
+curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-darwin-amd64 -o kompose
# Windows
-curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-windows-amd64.exe -o kompose.exe
+curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-windows-amd64.exe -o kompose.exe
chmod +x kompose
sudo mv ./kompose /usr/local/bin/kompose
@@ -49,7 +43,6 @@ sudo mv ./kompose /usr/local/bin/kompose
Alternatively, you can download the [tarball](https://github.com/kubernetes/kompose/releases).
-
{{% /tab %}}
{{% tab name="Build from source" %}}
@@ -87,8 +80,8 @@ On macOS you can install latest release via [Homebrew](https://brew.sh):
```bash
brew install kompose
-
```
+
{{% /tab %}}
{{< /tabs >}}
@@ -97,111 +90,117 @@ brew install kompose
In just a few steps, we'll take you from Docker Compose to Kubernetes. All
you need is an existing `docker-compose.yml` file.
-1. Go to the directory containing your `docker-compose.yml` file. If you don't
- have one, test using this one.
-
- ```yaml
- version: "2"
-
- services:
-
- redis-master:
- image: k8s.gcr.io/redis:e2e
- ports:
- - "6379"
-
- redis-slave:
- image: gcr.io/google_samples/gb-redisslave:v3
- ports:
- - "6379"
- environment:
- - GET_HOSTS_FROM=dns
-
- frontend:
- image: gcr.io/google-samples/gb-frontend:v4
- ports:
- - "80:80"
- environment:
- - GET_HOSTS_FROM=dns
- labels:
- kompose.service.type: LoadBalancer
- ```
-
-2. Run the `kompose up` command to deploy to Kubernetes directly, or skip to
- the next step instead to generate a file to use with `kubectl`.
-
- ```bash
- $ kompose up
- We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application.
- If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead.
-
- INFO Successfully created Service: redis
- INFO Successfully created Service: web
- INFO Successfully created Deployment: redis
- INFO Successfully created Deployment: web
-
- Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details.
- ```
-
-3. To convert the `docker-compose.yml` file to files that you can use with
- `kubectl`, run `kompose convert` and then `kubectl apply -f `.
-
- ```bash
- $ kompose convert
- INFO Kubernetes file "frontend-service.yaml" created
- INFO Kubernetes file "redis-master-service.yaml" created
- INFO Kubernetes file "redis-slave-service.yaml" created
- INFO Kubernetes file "frontend-deployment.yaml" created
- INFO Kubernetes file "redis-master-deployment.yaml" created
- INFO Kubernetes file "redis-slave-deployment.yaml" created
- ```
-
- ```bash
- $ kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
- service/frontend created
- service/redis-master created
- service/redis-slave created
- deployment.apps/frontend created
- deployment.apps/redis-master created
- deployment.apps/redis-slave created
- ```
-
- Your deployments are running in Kubernetes.
-
-4. Access your application.
-
- If you're already using `minikube` for your development process:
-
- ```bash
- $ minikube service frontend
- ```
-
- Otherwise, let's look up what IP your service is using!
-
- ```sh
- $ kubectl describe svc frontend
- Name: frontend
- Namespace: default
- Labels: service=frontend
- Selector: service=frontend
- Type: LoadBalancer
- IP: 10.0.0.183
- LoadBalancer Ingress: 192.0.2.89
- Port: 80 80/TCP
- NodePort: 80 31144/TCP
- Endpoints: 172.17.0.4:80
- Session Affinity: None
- No events.
-
- ```
-
- If you're using a cloud provider, your IP will be listed next to `LoadBalancer Ingress`.
-
- ```sh
- $ curl http://192.0.2.89
- ```
-
-
+1. Go to the directory containing your `docker-compose.yml` file. If you don't have one, test using this one.
+
+ ```yaml
+ version: "2"
+
+ services:
+
+ redis-master:
+ image: k8s.gcr.io/redis:e2e
+ ports:
+ - "6379"
+
+ redis-slave:
+ image: gcr.io/google_samples/gb-redisslave:v3
+ ports:
+ - "6379"
+ environment:
+ - GET_HOSTS_FROM=dns
+
+ frontend:
+ image: gcr.io/google-samples/gb-frontend:v4
+ ports:
+ - "80:80"
+ environment:
+ - GET_HOSTS_FROM=dns
+ labels:
+ kompose.service.type: LoadBalancer
+ ```
+
+2. To convert the `docker-compose.yml` file to files that you can use with
+ `kubectl`, run `kompose convert` and then `kubectl apply -f <output file>`.
+
+ ```bash
+ kompose convert
+ ```
+
+ The output is similar to:
+
+ ```none
+ INFO Kubernetes file "frontend-service.yaml" created
+ INFO Kubernetes file "redis-master-service.yaml" created
+ INFO Kubernetes file "redis-slave-service.yaml" created
+ INFO Kubernetes file "frontend-deployment.yaml" created
+ INFO Kubernetes file "redis-master-deployment.yaml" created
+ INFO Kubernetes file "redis-slave-deployment.yaml" created
+ ```
+
+ ```bash
+ kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
+ ```
+
+ The output is similar to:
+
+ ```none
+ service/frontend created
+ service/redis-master created
+ service/redis-slave created
+ deployment.apps/frontend created
+ deployment.apps/redis-master created
+ deployment.apps/redis-slave created
+ ```
+
+ Your deployments are running in Kubernetes.
+
+3. Access your application.
+
+ If you're already using `minikube` for your development process:
+
+ ```bash
+ minikube service frontend
+ ```
+
+ Otherwise, let's look up what IP your service is using!
+
+ ```sh
+ kubectl describe svc frontend
+ ```
+
+ ```none
+ Name: frontend
+ Namespace: default
+ Labels: service=frontend
+ Selector: service=frontend
+ Type: LoadBalancer
+ IP: 10.0.0.183
+ LoadBalancer Ingress: 192.0.2.89
+ Port: 80 80/TCP
+ NodePort: 80 31144/TCP
+ Endpoints: 172.17.0.4:80
+ Session Affinity: None
+ No events.
+ ```
+
+ If you're using a cloud provider, your IP will be listed next to `LoadBalancer Ingress`.
+
+ ```sh
+ curl http://192.0.2.89
+ ```
@@ -221,15 +220,17 @@ you need is an existing `docker-compose.yml` file.
Kompose supports two providers: OpenShift and Kubernetes.
You can choose a targeted provider using the global option `--provider`. If no provider is specified, Kubernetes is used by default.
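+
+As a minimal sketch (assuming a `docker-compose.yml` in the current directory):
+
+```shell
+# Target Kubernetes (the default)
+kompose convert
+
+# Explicitly target OpenShift instead
+kompose --provider openshift convert
+```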
-
## `kompose convert`
Kompose supports conversion of V1, V2, and V3 Docker Compose files into Kubernetes and OpenShift objects.
-### Kubernetes
+### Kubernetes `kompose convert` example
-```sh
-$ kompose --file docker-voting.yml convert
+```shell
+kompose --file docker-voting.yml convert
+```
+
+```none
WARN Unsupported key networks - ignoring
WARN Unsupported key build - ignoring
INFO Kubernetes file "worker-svc.yaml" created
@@ -242,16 +243,24 @@ INFO Kubernetes file "result-deployment.yaml" created
INFO Kubernetes file "vote-deployment.yaml" created
INFO Kubernetes file "worker-deployment.yaml" created
INFO Kubernetes file "db-deployment.yaml" created
+```
-$ ls
+```shell
+ls
+```
+
+```none
db-deployment.yaml docker-compose.yml docker-gitlab.yml redis-deployment.yaml result-deployment.yaml vote-deployment.yaml worker-deployment.yaml
db-svc.yaml docker-voting.yml redis-svc.yaml result-svc.yaml vote-svc.yaml worker-svc.yaml
```
You can also provide multiple docker-compose files at the same time:
-```sh
-$ kompose -f docker-compose.yml -f docker-guestbook.yml convert
+```shell
+kompose -f docker-compose.yml -f docker-guestbook.yml convert
+```
+
+```none
INFO Kubernetes file "frontend-service.yaml" created
INFO Kubernetes file "mlbparks-service.yaml" created
INFO Kubernetes file "mongodb-service.yaml" created
@@ -263,8 +272,13 @@ INFO Kubernetes file "mongodb-deployment.yaml" created
INFO Kubernetes file "mongodb-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "redis-master-deployment.yaml" created
INFO Kubernetes file "redis-slave-deployment.yaml" created
+```
-$ ls
+```shell
+ls
+```
+
+```none
mlbparks-deployment.yaml mongodb-service.yaml redis-slave-service.yaml mlbparks-service.yaml
frontend-deployment.yaml mongodb-claim0-persistentvolumeclaim.yaml redis-master-service.yaml
frontend-service.yaml mongodb-deployment.yaml redis-slave-deployment.yaml
@@ -273,10 +287,13 @@ redis-master-deployment.yaml
When multiple docker-compose files are provided, the configuration is merged. Any configuration that is common will be overridden by the subsequent file.
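+
+As a hedged illustration of that merge order (hypothetical file names):
+
+```shell
+# If base.yml sets the web service's image to nginx:1.19 and override.yml
+# sets it to nginx:1.21, the converted Deployment uses nginx:1.21 because
+# override.yml is listed last.
+kompose -f base.yml -f override.yml convert
+```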
-### OpenShift
+### OpenShift `kompose convert` example
-```sh
+```shell
-$ kompose --provider openshift --file docker-voting.yml convert
+kompose --provider openshift --file docker-voting.yml convert
+```
+
+```none
WARN [worker] Service cannot be created because of missing port.
INFO OpenShift file "vote-service.yaml" created
INFO OpenShift file "db-service.yaml" created
@@ -297,7 +314,10 @@ INFO OpenShift file "result-imagestream.yaml" created
It also supports creating a BuildConfig for the build directive in a service. By default, it uses the remote repo for the current git branch as the source repo, and the current branch as the source branch for the build. You can specify a different source repo and branch using the ``--build-repo`` and ``--build-branch`` options respectively.
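+
+As a hedged sketch of those flags (hypothetical repo URL and branch name), ahead of the default invocation shown below:
+
+```shell
+kompose --provider openshift --file buildconfig/docker-compose.yml convert \
+  --build-repo https://github.com/example/myapp.git \
+  --build-branch feature-x
+```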
-```sh
+```shell
-$ kompose --provider openshift --file buildconfig/docker-compose.yml convert
+kompose --provider openshift --file buildconfig/docker-compose.yml convert
+```
+
+```none
WARN [foo] Service cannot be created because of missing port.
INFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source.
INFO OpenShift file "foo-deploymentconfig.yaml" created
@@ -313,23 +333,31 @@ If you are manually pushing the OpenShift artifacts using ``oc create -f``, you
Kompose supports a straightforward way to deploy your "composed" application to Kubernetes or OpenShift via `kompose up`.
+### Kubernetes `kompose up` example
-### Kubernetes
-```sh
-$ kompose --file ./examples/docker-guestbook.yml up
+```shell
+kompose --file ./examples/docker-guestbook.yml up
+```
+
+```none
We are going to create Kubernetes deployments and services for your Dockerized application.
If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead.
-INFO Successfully created service: redis-master
-INFO Successfully created service: redis-slave
-INFO Successfully created service: frontend
+INFO Successfully created service: redis-master
+INFO Successfully created service: redis-slave
+INFO Successfully created service: frontend
INFO Successfully created deployment: redis-master
INFO Successfully created deployment: redis-slave
-INFO Successfully created deployment: frontend
+INFO Successfully created deployment: frontend
Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods' for details.
+```
+
+```shell
+kubectl get deployment,svc,pods
+```
-$ kubectl get deployment,svc,pods
+```none
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/frontend 1 1 1 1 4m
deployment.extensions/redis-master 1 1 1 1 4m
@@ -347,14 +375,19 @@ pod/redis-master-1432129712-63jn8 1/1 Running 0 4m
pod/redis-slave-2504961300-nve7b 1/1 Running 0 4m
```
-**Note**:
+{{< note >}}
- You must have a running Kubernetes cluster with a pre-configured kubectl context.
- Only deployments and services are generated and deployed to Kubernetes. If you need different kinds of resources, use the `kompose convert` and `kubectl apply -f` commands instead.
+{{< /note >}}
-### OpenShift
-```sh
-$ kompose --file ./examples/docker-guestbook.yml --provider openshift up
+### OpenShift `kompose up` example
+
+```shell
+kompose --file ./examples/docker-guestbook.yml --provider openshift up
+```
+
+```none
We are going to create OpenShift DeploymentConfigs and Services for your Dockerized application.
If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead.
@@ -369,8 +402,13 @@ INFO Successfully created deployment: redis-master
INFO Successfully created ImageStream: redis-master
Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is' for details.
+```
-$ oc get dc,svc,is
+```shell
+oc get dc,svc,is
+```
+
+```none
NAME REVISION DESIRED CURRENT TRIGGERED BY
dc/frontend 0 1 0 config,image(frontend:v4)
dc/redis-master 0 1 0 config,image(redis-master:e2e)
@@ -385,16 +423,16 @@ is/redis-master 172.30.12.200:5000/fff/redis-master
is/redis-slave 172.30.12.200:5000/fff/redis-slave v1
```
-**Note**:
-
-- You must have a running OpenShift cluster with a pre-configured `oc` context (`oc login`)
+{{< note >}}
+You must have a running OpenShift cluster with a pre-configured `oc` context (`oc login`).
+{{< /note >}}
## `kompose down`
-Once you have deployed "composed" application to Kubernetes, `$ kompose down` will help you to take the application out by deleting its deployments and services. If you need to remove other resources, use the 'kubectl' command.
+Once you have deployed your "composed" application to Kubernetes, `kompose down` helps you take the application down by deleting its deployments and services. If you need to remove other resources, use the `kubectl` command.
-```sh
-$ kompose --file docker-guestbook.yml down
+```shell
+kompose --file docker-guestbook.yml down
+```
+
+```none
INFO Successfully deleted service: redis-master
INFO Successfully deleted deployment: redis-master
INFO Successfully deleted service: redis-slave
@@ -403,16 +441,16 @@ INFO Successfully deleted service: frontend
INFO Successfully deleted deployment: frontend
```
-**Note**:
-
-- You must have a running Kubernetes cluster with a pre-configured kubectl context.
+{{< note >}}
+You must have a running Kubernetes cluster with a pre-configured `kubectl` context.
+{{< /note >}}
## Build and Push Docker Images
Kompose supports both building and pushing Docker images. When using the `build` key within your Docker Compose file, your image will:
- - Automatically be built with Docker using the `image` key specified within your file
- - Be pushed to the correct Docker repository using local credentials (located at `.docker/config`)
+- Automatically be built with Docker using the `image` key specified within your file
+- Be pushed to the correct Docker repository using local credentials (located at `.docker/config`)
Using an [example Docker Compose file](https://raw.githubusercontent.com/kubernetes/kompose/master/examples/buildconfig/docker-compose.yml):
@@ -428,7 +466,7 @@ services:
Using `kompose up` with a `build` key:
-```none
-$ kompose up
+```shell
+kompose up
+```
+
+```none
INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar'
INFO Building image 'docker.io/foo/bar' from directory 'build'
INFO Image 'docker.io/foo/bar' from directory 'build' built successfully
@@ -448,10 +486,10 @@ In order to disable the functionality, or choose to use BuildConfig generation (
```sh
# Disable building/pushing Docker images
-$ kompose up --build none
+kompose up --build none
# Generate Build Config artifacts for OpenShift
-$ kompose up --provider openshift --build build-config
+kompose up --provider openshift --build build-config
```
## Alternative Conversions
@@ -459,45 +497,54 @@ $ kompose up --provider openshift --build build-config
The default `kompose` transformation generates Kubernetes [Deployments](/docs/concepts/workloads/controllers/deployment/) and [Services](/docs/concepts/services-networking/service/) in YAML format. Alternatively, you can generate JSON with `-j`. You can also generate [Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/) objects, [Daemon Sets](/docs/concepts/workloads/controllers/daemonset/), or [Helm](https://github.com/helm/helm) charts.
-```sh
-$ kompose convert -j
+```shell
+kompose convert -j
+```
+
+```none
INFO Kubernetes file "redis-svc.json" created
INFO Kubernetes file "web-svc.json" created
INFO Kubernetes file "redis-deployment.json" created
INFO Kubernetes file "web-deployment.json" created
```
+
The `*-deployment.json` files contain the Deployment objects.
-```sh
-$ kompose convert --replication-controller
+```shell
+kompose convert --replication-controller
+```
+
+```none
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "web-svc.yaml" created
INFO Kubernetes file "redis-replicationcontroller.yaml" created
INFO Kubernetes file "web-replicationcontroller.yaml" created
```
-The `*-replicationcontroller.yaml` files contain the Replication Controller objects. If you want to specify replicas (default is 1), use `--replicas` flag: `$ kompose convert --replication-controller --replicas 3`
+The `*-replicationcontroller.yaml` files contain the Replication Controller objects. If you want to specify replicas (default is 1), use the `--replicas` flag: `kompose convert --replication-controller --replicas 3`.
-```sh
-$ kompose convert --daemon-set
+```shell
+kompose convert --daemon-set
+```
+
+```none
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "web-svc.yaml" created
INFO Kubernetes file "redis-daemonset.yaml" created
INFO Kubernetes file "web-daemonset.yaml" created
```
-The `*-daemonset.yaml` files contain the Daemon Set objects
+The `*-daemonset.yaml` files contain the DaemonSet objects.
-If you want to generate a Chart to be used with [Helm](https://github.com/kubernetes/helm) simply do:
+If you want to generate a Chart to be used with [Helm](https://github.com/kubernetes/helm) run:
-```sh
-$ kompose convert -c
+```shell
+kompose convert -c
+```
+
+```none
INFO Kubernetes file "web-svc.yaml" created
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "web-deployment.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
chart created in "./docker-compose/"
+```
-$ tree docker-compose/
+```shell
+tree docker-compose/
+```
+
+```none
docker-compose
├── Chart.yaml
├── README.md
@@ -578,7 +625,7 @@ If you want to create normal pods without controllers you can use `restart` cons
| `no` | Pod | `Never` |
{{< note >}}
-The controller object could be `deployment` or `replicationcontroller`, etc.
+The controller object could be `deployment` or `replicationcontroller`.
{{< /note >}}
For example, the `pival` service below will become a Pod. This container calculates the value of `pi`.
@@ -593,7 +640,7 @@ services:
restart: "on-failure"
```
-### Warning about Deployment Config's
+### Warning about Deployment Configurations
If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to "Recreate" instead of "RollingUpdate" (default). This is done to prevent multiple instances of a service from accessing a volume at the same time.
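+
+As a hedged way to check the generated strategy (assuming a compose service named `web` that mounts a volume):
+
+```shell
+kompose convert
+# Expect "type: Recreate" in the generated Deployment for services with volumes
+grep -A1 'strategy:' web-deployment.yaml
+```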
@@ -606,5 +653,3 @@ Please note that changing service name might break some `docker-compose` files.
Kompose supports Docker Compose versions 1, 2, and 3, with limited support for versions 2.1 and 3.2 due to their experimental nature.
A full list of compatibility between all three versions, including all incompatible Docker Compose keys, is available in our [conversion document](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md).
-
-
diff --git a/content/en/docs/tasks/debug-application-cluster/debug-service.md b/content/en/docs/tasks/debug-application-cluster/debug-service.md
index 4ee9d6f490af3..3613e5b2cb9d4 100644
--- a/content/en/docs/tasks/debug-application-cluster/debug-service.md
+++ b/content/en/docs/tasks/debug-application-cluster/debug-service.md
@@ -18,10 +18,10 @@ you to figure out what's going wrong.
## Running commands in a Pod
For many steps here you will want to see what a Pod running in the cluster
-sees. The simplest way to do this is to run an interactive alpine Pod:
+sees. The simplest way to do this is to run an interactive busybox Pod:
```none
-kubectl run -it --rm --restart=Never alpine --image=alpine sh
+kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh
```
{{< note >}}
@@ -111,7 +111,7 @@ kubectl get pods -l app=hostnames \
10.244.0.7
```
-The example container used for this walk-through simply serves its own hostname
+The example container used for this walk-through serves its own hostname
via HTTP on port 9376, but if you are debugging your own app, you'll want to
use whatever port number your Pods are listening on.
@@ -421,7 +421,7 @@ Earlier you saw that the Pods were running. You can re-check that:
kubectl get pods -l app=hostnames
```
```none
-NAME READY STATUS RESTARTS AGE
+NAME READY STATUS RESTARTS AGE
hostnames-632524106-bbpiw 1/1 Running 0 1h
hostnames-632524106-ly40y 1/1 Running 0 1h
hostnames-632524106-tlaok 1/1 Running 0 1h
diff --git a/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md b/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md
index bec423043d7ce..28fd615b459c8 100644
--- a/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md
+++ b/content/en/docs/tasks/extend-kubectl/kubectl-plugins.md
@@ -12,20 +12,15 @@ content_type: task
This guide demonstrates how to install and write extensions for [kubectl](/docs/reference/kubectl/kubectl/). By thinking of core `kubectl` commands as essential building blocks for interacting with a Kubernetes cluster, a cluster administrator can think
of plugins as a means of utilizing these building blocks to create more complex behavior. Plugins extend `kubectl` with new sub-commands, allowing for new and custom features not included in the main distribution of `kubectl`.
-
-
## {{% heading "prerequisites" %}}
-
You need to have a working `kubectl` binary installed.
-
-
## Installing kubectl plugins
-A plugin is nothing more than a standalone executable file, whose name begins with `kubectl-`. To install a plugin, simply move its executable file to anywhere on your `PATH`.
+A plugin is a standalone executable file whose name begins with `kubectl-`. To install a plugin, move its executable file to anywhere on your `PATH`.
You can also discover and install open source kubectl plugins
using [Krew](https://krew.dev/). Krew is a plugin manager maintained by
@@ -60,9 +55,9 @@ You can write a plugin in any programming language or script that allows you to
There is no plugin installation or pre-loading required. Plugin executables receive
the inherited environment from the `kubectl` binary.
-A plugin determines which command path it wishes to implement based on its name. For
-example, a plugin wanting to provide a new command `kubectl foo`, would simply be named
-`kubectl-foo`, and live somewhere in your `PATH`.
+A plugin determines which command path it wishes to implement based on its name.
+For example, a plugin named `kubectl-foo` provides a command `kubectl foo`. You must
+install the plugin executable somewhere in your `PATH`.
### Example plugin
@@ -88,32 +83,34 @@ echo "I am a plugin named kubectl-foo"
### Using a plugin
-To use the above plugin, simply make it executable:
+To use a plugin, make the plugin executable:
-```
+```shell
sudo chmod +x ./kubectl-foo
```
and place it anywhere in your `PATH`:
-```
+```shell
sudo mv ./kubectl-foo /usr/local/bin
```
You may now invoke your plugin as a `kubectl` command:
-```
+```shell
kubectl foo
```
+
```
I am a plugin named kubectl-foo
```
All args and flags are passed as-is to the executable:
-```
+```shell
kubectl foo version
```
+
```
1.0.0
```
@@ -124,6 +121,7 @@ All environment variables are also passed as-is to the executable:
export KUBECONFIG=~/.kube/config
kubectl foo config
```
+
```
/home/<user>/.kube/config
```
@@ -131,6 +129,7 @@ kubectl foo config
```shell
KUBECONFIG=/etc/kube/config kubectl foo config
```
+
```
/etc/kube/config
```
@@ -376,16 +375,11 @@ set up a build environment (if it needs compiling), and deploy the plugin.
If you also make compiled packages available, or use Krew, that will make
installs easier.
-
-
## {{% heading "whatsnext" %}}
-
* Check the Sample CLI Plugin repository for a
[detailed example](https://github.com/kubernetes/sample-cli-plugin) of a
plugin written in Go.
In case of any questions, feel free to reach out to the
[SIG CLI team](https://github.com/kubernetes/community/tree/master/sig-cli).
* Read about [Krew](https://krew.dev/), a package manager for kubectl plugins.
-
-
diff --git a/content/en/docs/tasks/job/parallel-processing-expansion.md b/content/en/docs/tasks/job/parallel-processing-expansion.md
index e92fa9f5bb657..8f5994929e426 100644
--- a/content/en/docs/tasks/job/parallel-processing-expansion.md
+++ b/content/en/docs/tasks/job/parallel-processing-expansion.md
@@ -12,7 +12,7 @@ based on a common template. You can use this approach to process batches of work
parallel.
For this example there are only three items: _apple_, _banana_, and _cherry_.
-The sample Jobs process each item simply by printing a string then pausing.
+The sample Jobs process each item by printing a string then pausing.
See [using Jobs in real workloads](#using-jobs-in-real-workloads) to learn about how
this pattern fits more realistic use cases.
diff --git a/content/en/docs/tasks/run-application/delete-stateful-set.md b/content/en/docs/tasks/run-application/delete-stateful-set.md
index 57e54e679722a..a70e018b85701 100644
--- a/content/en/docs/tasks/run-application/delete-stateful-set.md
+++ b/content/en/docs/tasks/run-application/delete-stateful-set.md
@@ -66,7 +66,7 @@ Use caution when deleting a PVC, as it may lead to data loss.
### Complete deletion of a StatefulSet
-To simply delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following:
+To delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following:
```shell
grace=$(kubectl get pods --template '{{.spec.terminationGracePeriodSeconds}}')
diff --git a/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md b/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md
index cda469f217d46..28de1865fd937 100644
--- a/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md
+++ b/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md
@@ -46,7 +46,7 @@ before the kubelet deletes the name from the apiserver.
Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable.
The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a
-[timeout](/docs/concepts/architecture/nodes/#node-condition).
+[timeout](/docs/concepts/architecture/nodes/#condition).
Pods may also enter these states when the user attempts graceful deletion of a Pod
on an unreachable Node.
The only ways in which a Pod in such a state can be removed from the apiserver are as follows:
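+
+One of those options is force deletion, sketched here for a hypothetical Pod name:
+
+```shell
+kubectl delete pods web-0 --grace-period=0 --force
+```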
diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
index 56a48c8b308ac..763d9ab996624 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -383,7 +383,12 @@ behavior:
periodSeconds: 60
```
-When the number of pods is more than 40 the second policy will be used for scaling down.
+`periodSeconds` indicates the length of time in the past for which the policy must hold true.
+The first policy _(Pods)_ allows at most 4 replicas to be scaled down in one minute. The second policy
+_(Percent)_ allows at most 10% of the current replicas to be scaled down in one minute.
+
+Since by default the policy which allows the highest amount of change is selected, the second policy will
+only be used when the number of pod replicas is more than 40. With 40 or fewer replicas, the first policy will be applied.
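+
+As a hedged shell sketch of that selection arithmetic (illustrative only):
+
+```shell
+replicas=80
+pods_policy=4                         # Pods policy: at most 4 per period
+percent_policy=$(( replicas / 10 ))   # Percent policy: 10% of current replicas
+# By default, the policy allowing the larger change wins:
+echo $(( pods_policy > percent_policy ? pods_policy : percent_policy ))   # prints 8
+```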
For instance, if there are 80 replicas and the target has to be scaled down to 10 replicas,
then during the first step 8 replicas will be reduced. In the next iteration, when the number
of replicas is 72, 10% of the pods is 7.2, but the number is rounded up to 8. On each loop of
@@ -391,10 +396,6 @@ the autoscaler controller the number of pods to be change is re-calculated based
of current replicas. When the number of replicas falls below 40 the first policy _(Pods)_ is applied
and 4 replicas will be reduced at a time.
-`periodSeconds` indicates the length of time in the past for which the policy must hold true.
-The first policy allows at most 4 replicas to be scaled down in one minute. The second policy
-allows at most 10% of the current replicas to be scaled down in one minute.
-
The policy selection can be changed by specifying the `selectPolicy` field for a scaling
direction. Setting the value to `Min` selects the policy which allows the
smallest change in the replica count. Setting the value to `Disabled` completely disables
@@ -441,7 +442,7 @@ behavior:
periodSeconds: 15
selectPolicy: Max
```
-For scaling down the stabilization window is _300_ seconds(or the value of the
+For scaling down the stabilization window is _300_ seconds (or the value of the
`--horizontal-pod-autoscaler-downscale-stabilization` flag if provided). There is only a single policy
for scaling down, which allows 100% of the currently running replicas to be removed, which
means the scaling target can be scaled down to the minimum allowed replicas.
diff --git a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md
index f1738ff53eea9..36e5334f3d5b6 100644
--- a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md
+++ b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md
@@ -171,10 +171,10 @@ properties.
The script in the `init-mysql` container also applies either `primary.cnf` or
`replica.cnf` from the ConfigMap by copying the contents into `conf.d`.
Because the example topology consists of a single primary MySQL server and any number of
-replicas, the script simply assigns ordinal `0` to be the primary server, and everyone
+replicas, the script assigns ordinal `0` to be the primary server, and everyone
else to be replicas.
Combined with the StatefulSet controller's
-[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees/),
+[deployment order guarantee](/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees),
this ensures the primary MySQL server is Ready before creating replicas, so they can begin
replicating.
diff --git a/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md b/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md
index 4c43948a215c8..bdc3b0c524a4f 100644
--- a/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md
+++ b/content/en/docs/tasks/run-application/run-single-instance-stateful-application.md
@@ -65,6 +65,8 @@ for a secure solution.
kubectl describe deployment mysql
+ The output is similar to this:
+
Name: mysql
Namespace: default
CreationTimestamp: Tue, 01 Nov 2016 11:18:45 -0700
@@ -105,6 +107,8 @@ for a secure solution.
kubectl get pods -l app=mysql
+ The output is similar to this:
+
NAME READY STATUS RESTARTS AGE
mysql-63082529-2z3ki 1/1 Running 0 3m
@@ -112,6 +116,8 @@ for a secure solution.
kubectl describe pvc mysql-pv-claim
+ The output is similar to this:
+
Name: mysql-pv-claim
Namespace: default
StorageClass:
diff --git a/content/en/docs/tasks/run-application/run-stateless-application-deployment.md b/content/en/docs/tasks/run-application/run-stateless-application-deployment.md
index df604facf8b7a..62bd984ddc7e3 100644
--- a/content/en/docs/tasks/run-application/run-stateless-application-deployment.md
+++ b/content/en/docs/tasks/run-application/run-stateless-application-deployment.md
@@ -51,7 +51,6 @@ a Deployment that runs the nginx:1.14.2 Docker image:
The output is similar to this:
- user@computer:~/website$ kubectl describe deployment nginx-deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Tue, 30 Aug 2016 18:11:37 -0700
diff --git a/content/en/docs/tasks/tls/manual-rotation-of-ca-certificates.md b/content/en/docs/tasks/tls/manual-rotation-of-ca-certificates.md
index 3147ac3a18296..1720ab34a27e6 100644
--- a/content/en/docs/tasks/tls/manual-rotation-of-ca-certificates.md
+++ b/content/en/docs/tasks/tls/manual-rotation-of-ca-certificates.md
@@ -51,12 +51,12 @@ Configurations with a single API server will experience unavailability while the
If any pods are started before new CA is used by API servers, they will get this update and trust both old and new CAs.
```shell
- base64_encoded_ca="$(base64 )"
+ base64_encoded_ca="$(base64 -w0 )"
for namespace in $(kubectl get ns --no-headers | awk '{print $1}'); do
for token in $(kubectl get secrets --namespace "$namespace" --field-selector type=kubernetes.io/service-account-token -o name); do
kubectl get $token --namespace "$namespace" -o yaml | \
- /bin/sed "s/\(ca.crt:\).*/\1 ${base64_encoded_ca}" | \
+ /bin/sed "s/\(ca.crt:\).*/\1 ${base64_encoded_ca}/" | \
kubectl apply -f -
done
done
@@ -132,10 +132,10 @@ Configurations with a single API server will experience unavailability while the
1. If your cluster is using bootstrap tokens to join nodes, update the ConfigMap `cluster-info` in the `kube-public` namespace with new CA.
```shell
- base64_encoded_ca="$(base64 /etc/kubernetes/pki/ca.crt)"
+ base64_encoded_ca="$(base64 -w0 /etc/kubernetes/pki/ca.crt)"
kubectl get cm/cluster-info --namespace kube-public -o yaml | \
- /bin/sed "s/\(certificate-authority-data:\).*/\1 ${base64_encoded_ca}" | \
+ /bin/sed "s/\(certificate-authority-data:\).*/\1 ${base64_encoded_ca}/" | \
kubectl apply -f -
```
diff --git a/content/en/docs/tutorials/_index.md b/content/en/docs/tutorials/_index.md
index b4f0709a7698b..630c04f5f6cae 100644
--- a/content/en/docs/tutorials/_index.md
+++ b/content/en/docs/tutorials/_index.md
@@ -33,7 +33,7 @@ Before walking through each tutorial, you may want to bookmark the
* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
-* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/)
+* [Example: Deploying PHP Guestbook application with MongoDB](/docs/tutorials/stateless-application/guestbook/)
## Stateful Applications
diff --git a/content/en/docs/tutorials/clusters/apparmor.md b/content/en/docs/tutorials/clusters/apparmor.md
index 8ca9f30bad47e..54c8a0f44c9db 100644
--- a/content/en/docs/tutorials/clusters/apparmor.md
+++ b/content/en/docs/tutorials/clusters/apparmor.md
@@ -168,8 +168,7 @@ k8s-apparmor-example-deny-write (enforce)
*This example assumes you have already set up a cluster with AppArmor support.*
-First, we need to load the profile we want to use onto our nodes. The profile we'll use simply
-denies all file writes:
+First, we need to load the profile we want to use onto our nodes. This profile denies all file writes:
```shell
#include <tunables/global>
diff --git a/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html
index c610b6e9f4db2..1d8a069984aae 100644
--- a/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html
+++ b/content/en/docs/tutorials/kubernetes-basics/expose/expose-intro.html
@@ -63,13 +63,7 @@ Summary
Services and Labels
A Service routes traffic across a set of Pods. Services are the abstraction that allows Pods to die and replicate in Kubernetes without impacting your application. Discovery and routing among dependent Pods (such as the frontend and backend components in an application) are handled by Kubernetes Services.
diff --git a/content/en/docs/tutorials/kubernetes-basics/public/images/module_04_labels.svg b/content/en/docs/tutorials/kubernetes-basics/public/images/module_04_labels.svg
index 31cd8638a1d09..781bfa0888e5a 100644
--- a/content/en/docs/tutorials/kubernetes-basics/public/images/module_04_labels.svg
+++ b/content/en/docs/tutorials/kubernetes-basics/public/images/module_04_labels.svg
@@ -1,710 +1,1054 @@
-(old SVG markup, 710 lines of vector-drawing elements; visible text labels: "Docker", "Kubelt")
+(replacement SVG markup, 1054 lines, redrawing the module_04_labels diagram)
diff --git a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md
index f43af0e15e501..b8aaaabb5b8ee 100644
--- a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md
+++ b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md
@@ -934,10 +934,10 @@ web-2 0/1 Terminating 0 3m
When the `web` StatefulSet was recreated, it first relaunched `web-0`.
Since `web-1` was already Running and Ready, when `web-0` transitioned to
- Running and Ready, it simply adopted this Pod. Since you recreated the StatefulSet
- with `replicas` equal to 2, once `web-0` had been recreated, and once
- `web-1` had been determined to already be Running and Ready, `web-2` was
- terminated.
+Running and Ready, it adopted this Pod. Since you recreated the StatefulSet
+with `replicas` equal to 2, once `web-0` had been recreated, and once
+`web-1` had been determined to already be Running and Ready, `web-2` was
+terminated.
Let's take another look at the contents of the `index.html` file served by the
Pods' webservers:
@@ -945,6 +945,7 @@ Pods' webservers:
```shell
for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done
```
+
```
web-0
web-1
@@ -970,15 +971,18 @@ In another terminal, delete the StatefulSet again. This time, omit the
```shell
kubectl delete statefulset web
```
+
```
statefulset.apps "web" deleted
```
+
Examine the output of the `kubectl get` command running in the first terminal,
and wait for all of the Pods to transition to Terminating.
```shell
kubectl get pods -w -l app=nginx
```
+
```
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 11m
@@ -1006,10 +1010,10 @@ the cascade does not delete the headless Service associated with the StatefulSet
You must delete the `nginx` Service manually.
{{< /note >}}
-
```shell
kubectl delete service nginx
```
+
```
service "nginx" deleted
```
@@ -1019,6 +1023,7 @@ Recreate the StatefulSet and headless Service one more time:
```shell
kubectl apply -f web.yaml
```
+
```
service/nginx created
statefulset.apps/web created
@@ -1030,6 +1035,7 @@ the contents of their `index.html` files:
```shell
for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done
```
+
```
web-0
web-1
@@ -1044,13 +1050,17 @@ Finally, delete the `nginx` Service...
```shell
kubectl delete service nginx
```
+
```
service "nginx" deleted
```
+
...and the `web` StatefulSet:
+
```shell
kubectl delete statefulset web
```
+
```
statefulset "web" deleted
```
diff --git a/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md b/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md
deleted file mode 100644
index d3a38c4df5b34..0000000000000
--- a/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md
+++ /dev/null
@@ -1,460 +0,0 @@
----
-title: "Example: Add logging and metrics to the PHP / Redis Guestbook example"
-reviewers:
-- sftim
-content_type: tutorial
-weight: 21
-card:
- name: tutorials
- weight: 31
- title: "Example: Add logging and metrics to the PHP / Redis Guestbook example"
----
-
-
-This tutorial builds upon the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial. Lightweight log, metric, and network data open source shippers, or *Beats*, from Elastic are deployed in the same Kubernetes cluster as the guestbook. The Beats collect, parse, and index the data into Elasticsearch so that you can view and analyze the resulting operational information in Kibana. This example consists of the following components:
-
-* A running instance of the [PHP Guestbook with Redis tutorial](/docs/tutorials/stateless-application/guestbook)
-* Elasticsearch and Kibana
-* Filebeat
-* Metricbeat
-* Packetbeat
-
-## {{% heading "objectives" %}}
-
-* Start up the PHP Guestbook with Redis.
-* Install kube-state-metrics.
-* Create a Kubernetes Secret.
-* Deploy the Beats.
-* View dashboards of your logs and metrics.
-
-## {{% heading "prerequisites" %}}
-
-
-{{< include "task-tutorial-prereqs.md" >}}
-{{< version-check >}}
-
-Additionally you need:
-
-* A running deployment of the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial.
-
-* A running Elasticsearch and Kibana deployment. You can use [Elasticsearch Service in Elastic Cloud](https://cloud.elastic.co),
- run the [downloaded files](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-elastic-stack.html)
- on your workstation or servers, or the [Elastic Helm Charts](https://github.com/elastic/helm-charts).
-
-
-
-## Start up the PHP Guestbook with Redis
-
-This tutorial builds on the [PHP Guestbook with Redis](/docs/tutorials/stateless-application/guestbook) tutorial. If you have the guestbook application running, then you can monitor that. If you do not have it running then follow the instructions to deploy the guestbook and do not perform the **Cleanup** steps. Come back to this page when you have the guestbook running.
-
-## Add a Cluster role binding
-
-Create a [cluster level role binding](/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) so that you can deploy kube-state-metrics and the Beats at the cluster level (in kube-system).
-
-```shell
-kubectl create clusterrolebinding cluster-admin-binding \
- --clusterrole=cluster-admin --user=
-```
-
-## Install kube-state-metrics
-
-Kubernetes [*kube-state-metrics*](https://github.com/kubernetes/kube-state-metrics) is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. Metricbeat reports these metrics. Add kube-state-metrics to the Kubernetes cluster that the guestbook is running in.
-
-```shell
-git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics
-kubectl apply -f kube-state-metrics/examples/standard
-```
-
-### Check to see if kube-state-metrics is running
-
-```shell
-kubectl get pods --namespace=kube-system -l app.kubernetes.io/name=kube-state-metrics
-```
-
-Output:
-
-```
-NAME READY STATUS RESTARTS AGE
-kube-state-metrics-89d656bf8-vdthm 1/1 Running 0 21s
-```
-
-## Clone the Elastic examples GitHub repo
-
-```shell
-git clone https://github.com/elastic/examples.git
-```
-
-The rest of the commands will reference files in the `examples/beats-k8s-send-anywhere` directory, so change dir there:
-
-```shell
-cd examples/beats-k8s-send-anywhere
-```
-
-## Create a Kubernetes Secret
-
-A Kubernetes {{< glossary_tooltip text="Secret" term_id="secret" >}} is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in an image; putting it in a Secret object allows for more control over how it is used, and reduces the risk of accidental exposure.
-
-{{< note >}}
-There are two sets of steps here, one for *self managed* Elasticsearch and Kibana (running on your servers or using the Elastic Helm Charts), and a second separate set for the *managed service* Elasticsearch Service in Elastic Cloud. Only create the secret for the type of Elasticsearch and Kibana system that you will use for this tutorial.
-{{< /note >}}
-
-{{< tabs name="tab_with_md" >}}
-{{% tab name="Self Managed" %}}
-
-### Self managed
-
-Switch to the **Managed service** tab if you are connecting to Elasticsearch Service in Elastic Cloud.
-
-### Set the credentials
-
-There are four files to edit to create a k8s secret when you are connecting to self managed Elasticsearch and Kibana (self managed is effectively anything other than the managed Elasticsearch Service in Elastic Cloud). The files are:
-
-1. `ELASTICSEARCH_HOSTS`
-1. `ELASTICSEARCH_PASSWORD`
-1. `ELASTICSEARCH_USERNAME`
-1. `KIBANA_HOST`
-
-Set these with the information for your Elasticsearch cluster and your Kibana host. Here are some examples (also see [*this configuration*](https://stackoverflow.com/questions/59892896/how-to-connect-from-minikube-to-elasticsearch-installed-on-host-local-developme/59892897#59892897))
-
-#### `ELASTICSEARCH_HOSTS`
-
-1. A nodeGroup from the Elastic Elasticsearch Helm Chart:
-
- ```
- ["http://elasticsearch-master.default.svc.cluster.local:9200"]
- ```
-
-1. A single Elasticsearch node running on a Mac where your Beats are running in Docker for Mac:
-
- ```
- ["http://host.docker.internal:9200"]
- ```
-
-1. Two Elasticsearch nodes running in VMs or on physical hardware:
-
- ```
- ["http://host1.example.com:9200", "http://host2.example.com:9200"]
- ```
-
-Edit `ELASTICSEARCH_HOSTS`:
-
-```shell
-vi ELASTICSEARCH_HOSTS
-```
-
-#### `ELASTICSEARCH_PASSWORD`
-
-Just the password; no whitespace, quotes, `<` or `>`:
-
-```
-
-```
-
-Edit `ELASTICSEARCH_PASSWORD`:
-
-```shell
-vi ELASTICSEARCH_PASSWORD
-```
-
-#### `ELASTICSEARCH_USERNAME`
-
-Just the username; no whitespace, quotes, `<` or `>`:
-
-```
-
-```
-
-Edit `ELASTICSEARCH_USERNAME`:
-
-```shell
-vi ELASTICSEARCH_USERNAME
-```
-
-#### `KIBANA_HOST`
-
-1. The Kibana instance from the Elastic Kibana Helm Chart. The subdomain `default` refers to the default namespace. If you have deployed the Helm Chart using a different namespace, then your subdomain will be different:
-
- ```
- "kibana-kibana.default.svc.cluster.local:5601"
- ```
-
-1. A Kibana instance running on a Mac where your Beats are running in Docker for Mac:
-
- ```
- "host.docker.internal:5601"
- ```
-1. Two Elasticsearch nodes running in VMs or on physical hardware:
-
- ```
- "host1.example.com:5601"
- ```
-
-Edit `KIBANA_HOST`:
-
-```shell
-vi KIBANA_HOST
-```
-
-### Create a Kubernetes Secret
-
-This command creates a Secret in the Kubernetes system level namespace (`kube-system`) based on the files you just edited:
-
-```shell
-kubectl create secret generic dynamic-logging \
- --from-file=./ELASTICSEARCH_HOSTS \
- --from-file=./ELASTICSEARCH_PASSWORD \
- --from-file=./ELASTICSEARCH_USERNAME \
- --from-file=./KIBANA_HOST \
- --namespace=kube-system
-```
-
-{{% /tab %}}
-{{% tab name="Managed service" %}}
-
-## Managed service
-
-This tab is for Elasticsearch Service in Elastic Cloud only, if you have already created a secret for a self managed Elasticsearch and Kibana deployment, then continue with [Deploy the Beats](#deploy-the-beats).
-
-### Set the credentials
-
-There are two files to edit to create a Kubernetes Secret when you are connecting to the managed Elasticsearch Service in Elastic Cloud. The files are:
-
-1. `ELASTIC_CLOUD_AUTH`
-1. `ELASTIC_CLOUD_ID`
-
-Set these with the information provided to you from the Elasticsearch Service console when you created the deployment. Here are some examples:
-
-#### `ELASTIC_CLOUD_ID`
-
-```
-devk8s:ABC123def456ghi789jkl123mno456pqr789stu123vwx456yza789bcd012efg345hijj678klm901nop345zEwOTJjMTc5YWQ0YzQ5OThlN2U5MjAwYTg4NTIzZQ==
-```
-
-#### `ELASTIC_CLOUD_AUTH`
-
-Just the username, a colon (`:`), and the password, no whitespace or quotes:
-
-```
-elastic:VFxJJf9Tjwer90wnfTghsn8w
-```
-
-### Edit the required files:
-
-```shell
-vi ELASTIC_CLOUD_ID
-vi ELASTIC_CLOUD_AUTH
-```
-
-### Create a Kubernetes Secret
-
-This command creates a Secret in the Kubernetes system level namespace (`kube-system`) based on the files you just edited:
-
-```shell
-kubectl create secret generic dynamic-logging \
- --from-file=./ELASTIC_CLOUD_ID \
- --from-file=./ELASTIC_CLOUD_AUTH \
- --namespace=kube-system
-```
-
-{{% /tab %}}
-
-{{< /tabs >}}
-
-## Deploy the Beats
-
-Manifest files are provided for each Beat. These manifest files use the secret created earlier to configure the Beats to connect to your Elasticsearch and Kibana servers.
-
-### About Filebeat
-
-Filebeat will collect logs from the Kubernetes nodes and the containers running in each pod running on those nodes. Filebeat is deployed as a {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}}. Filebeat can autodiscover applications running in your Kubernetes cluster. At startup Filebeat scans existing containers and launches the proper configurations for them, then it will watch for new start/stop events.
-
-Here is the autodiscover configuration that enables Filebeat to locate and parse Redis logs from the Redis containers deployed with the guestbook application. This configuration is in the file `filebeat-kubernetes.yaml`:
-
-```yaml
-- condition.contains:
- kubernetes.labels.app: redis
- config:
- - module: redis
- log:
- input:
- type: docker
- containers.ids:
- - ${data.kubernetes.container.id}
- slowlog:
- enabled: true
- var.hosts: ["${data.host}:${data.port}"]
-```
-
-This configures Filebeat to apply the Filebeat module `redis` when a container is detected with a label `app` containing the string `redis`. The redis module has the ability to collect the `log` stream from the container by using the docker input type (reading the file on the Kubernetes node associated with the STDOUT stream from this Redis container). Additionally, the module has the ability to collect Redis `slowlog` entries by connecting to the proper pod host and port, which is provided in the container metadata.
-
-### Deploy Filebeat:
-
-```shell
-kubectl create -f filebeat-kubernetes.yaml
-```
-
-#### Verify
-
-```shell
-kubectl get pods -n kube-system -l k8s-app=filebeat-dynamic
-```
-
-### About Metricbeat
-
-Metricbeat autodiscover is configured in the same way as Filebeat. Here is the Metricbeat autodiscover configuration for the Redis containers. This configuration is in the file `metricbeat-kubernetes.yaml`:
-
-```yaml
-- condition.equals:
- kubernetes.labels.tier: backend
- config:
- - module: redis
- metricsets: ["info", "keyspace"]
- period: 10s
-
- # Redis hosts
- hosts: ["${data.host}:${data.port}"]
-```
-
-This configures Metricbeat to apply the Metricbeat module `redis` when a container is detected with a label `tier` equal to the string `backend`. The `redis` module has the ability to collect the `info` and `keyspace` metrics from the container by connecting to the proper pod host and port, which is provided in the container metadata.
-
-### Deploy Metricbeat
-
-```shell
-kubectl create -f metricbeat-kubernetes.yaml
-```
-
-#### Verify
-
-```shell
-kubectl get pods -n kube-system -l k8s-app=metricbeat
-```
-
-### About Packetbeat
-
-Packetbeat configuration is different than Filebeat and Metricbeat. Rather than specify patterns to match against container labels the configuration is based on the protocols and port numbers involved. Shown below is a subset of the port numbers.
-
-{{< note >}}
-If you are running a service on a non-standard port add that port number to the appropriate type in `filebeat.yaml` and delete/create the Packetbeat DaemonSet.
-{{< /note >}}
-
-```yaml
-packetbeat.interfaces.device: any
-
-packetbeat.protocols:
-- type: dns
- ports: [53]
- include_authorities: true
- include_additionals: true
-
-- type: http
- ports: [80, 8000, 8080, 9200]
-
-- type: mysql
- ports: [3306]
-
-- type: redis
- ports: [6379]
-
-packetbeat.flows:
- timeout: 30s
- period: 10s
-```
-
-#### Deploy Packetbeat
-
-```shell
-kubectl create -f packetbeat-kubernetes.yaml
-```
-
-#### Verify
-
-```shell
-kubectl get pods -n kube-system -l k8s-app=packetbeat-dynamic
-```
-
-## View in Kibana
-
-Open Kibana in your browser and then open the **Dashboard** application. In the search bar type Kubernetes and click on the Metricbeat dashboard for Kubernetes. This dashboard reports on the state of your Nodes, deployments, etc.
-
-Search for Packetbeat on the Dashboard page, and view the Packetbeat overview.
-
-Similarly, view dashboards for Apache and Redis. You will see dashboards for logs and metrics for each. The Apache Metricbeat dashboard will be blank. Look at the Apache Filebeat dashboard and scroll to the bottom to view the Apache error logs. This will tell you why there are no metrics available for Apache.
-
-To enable Metricbeat to retrieve the Apache metrics, enable server-status by adding a ConfigMap including a mod-status configuration file and re-deploy the guestbook.
-
-## Scale your Deployments and see new pods being monitored
-
-List the existing Deployments:
-
-```shell
-kubectl get deployments
-```
-
-The output:
-
-```
-NAME READY UP-TO-DATE AVAILABLE AGE
-frontend 3/3 3 3 3h27m
-redis-master 1/1 1 1 3h27m
-redis-slave 2/2 2 2 3h27m
-```
-
-Scale the frontend down to two pods:
-
-```shell
-kubectl scale --replicas=2 deployment/frontend
-```
-
-The output:
-
-```
-deployment.extensions/frontend scaled
-```
-
-Scale the frontend back up to three pods:
-
-```shell
-kubectl scale --replicas=3 deployment/frontend
-```
-
-## View the changes in Kibana
-
-See the screenshot, add the indicated filters and then add the columns to the view. You can see the ScalingReplicaSet entry that is marked, following from there to the top of the list of events shows the image being pulled, the volumes mounted, the pod starting, etc.
-
-
-## {{% heading "cleanup" %}}
-
-Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.
-
-1. Run the following commands to delete all Pods, Deployments, and Services.
-
- ```shell
- kubectl delete deployment -l app=redis
- kubectl delete service -l app=redis
- kubectl delete deployment -l app=guestbook
- kubectl delete service -l app=guestbook
- kubectl delete -f filebeat-kubernetes.yaml
- kubectl delete -f metricbeat-kubernetes.yaml
- kubectl delete -f packetbeat-kubernetes.yaml
- kubectl delete secret dynamic-logging -n kube-system
- ```
-
-1. Query the list of Pods to verify that no Pods are running:
-
- ```shell
- kubectl get pods
- ```
-
- The response should be this:
-
- ```
- No resources found.
- ```
-
-## {{% heading "whatsnext" %}}
-
-* Learn about [tools for monitoring resources](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
-* Read more about [logging architecture](/docs/concepts/cluster-administration/logging/)
-* Read more about [application introspection and debugging](/docs/tasks/debug-application-cluster/)
-* Read more about [troubleshoot applications](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
-
diff --git a/content/en/docs/tutorials/stateless-application/guestbook.md b/content/en/docs/tutorials/stateless-application/guestbook.md
index c591e09cea8f8..b60afcf5f8646 100644
--- a/content/en/docs/tutorials/stateless-application/guestbook.md
+++ b/content/en/docs/tutorials/stateless-application/guestbook.md
@@ -1,5 +1,5 @@
---
-title: "Example: Deploying PHP Guestbook application with Redis"
+title: "Example: Deploying PHP Guestbook application with MongoDB"
reviewers:
- ahmetb
content_type: tutorial
@@ -7,22 +7,19 @@ weight: 20
card:
name: tutorials
weight: 30
- title: "Stateless Example: PHP Guestbook with Redis"
+ title: "Stateless Example: PHP Guestbook with MongoDB"
+min-kubernetes-server-version: v1.14
---
-This tutorial shows you how to build and deploy a simple, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/). This example consists of the following components:
+This tutorial shows you how to build and deploy a simple _(not production-ready)_, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/). This example consists of the following components:
-* A single-instance [Redis](https://redis.io/) master to store guestbook entries
-* Multiple [replicated Redis](https://redis.io/topics/replication) instances to serve reads
+* A single-instance [MongoDB](https://www.mongodb.com/) to store guestbook entries
* Multiple web frontend instances
-
-
## {{% heading "objectives" %}}
-* Start up a Redis master.
-* Start up Redis slaves.
+* Start up a Mongo database.
* Start up the guestbook frontend.
* Expose and view the Frontend Service.
* Clean up.
@@ -39,24 +36,28 @@ This tutorial shows you how to build and deploy a simple, multi-tier web applica
-## Start up the Redis Master
+## Start up the Mongo Database
-The guestbook application uses Redis to store its data. It writes its data to a Redis master instance and reads data from multiple Redis slave instances.
+The guestbook application uses MongoDB to store its data.
-### Creating the Redis Master Deployment
+### Creating the Mongo Deployment
-The manifest file, included below, specifies a Deployment controller that runs a single replica Redis master Pod.
+The manifest file, included below, specifies a Deployment controller that runs a single replica MongoDB Pod.
-{{< codenew file="application/guestbook/redis-master-deployment.yaml" >}}
+{{< codenew file="application/guestbook/mongo-deployment.yaml" >}}
1. Launch a terminal window in the directory where you downloaded the manifest files.
-1. Apply the Redis Master Deployment from the `redis-master-deployment.yaml` file:
+1. Apply the MongoDB Deployment from the `mongo-deployment.yaml` file:
```shell
- kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
+ kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml
```
+
-1. Query the list of Pods to verify that the Redis Master Pod is running:
+1. Query the list of Pods to verify that the MongoDB Pod is running:
```shell
kubectl get pods
@@ -66,32 +67,33 @@ The manifest file, included below, specifies a Deployment controller that runs a
```shell
NAME READY STATUS RESTARTS AGE
- redis-master-1068406935-3lswp 1/1 Running 0 28s
+ mongo-5cfd459dd4-lrcjb 1/1 Running 0 28s
```
-1. Run the following command to view the logs from the Redis Master Pod:
+1. Run the following command to view the logs from the MongoDB Deployment:
```shell
- kubectl logs -f POD-NAME
+ kubectl logs -f deployment/mongo
```
-{{< note >}}
-Replace POD-NAME with the name of your Pod.
-{{< /note >}}
-
-### Creating the Redis Master Service
+### Creating the MongoDB Service
-The guestbook application needs to communicate to the Redis master to write its data. You need to apply a [Service](/docs/concepts/services-networking/service/) to proxy the traffic to the Redis master Pod. A Service defines a policy to access the Pods.
+The guestbook application needs to communicate with MongoDB to write its data. You need to apply a [Service](/docs/concepts/services-networking/service/) to proxy the traffic to the MongoDB Pod. A Service defines a policy to access the Pods.
-{{< codenew file="application/guestbook/redis-master-service.yaml" >}}
+{{< codenew file="application/guestbook/mongo-service.yaml" >}}
-1. Apply the Redis Master Service from the following `redis-master-service.yaml` file:
+1. Apply the MongoDB Service from the following `mongo-service.yaml` file:
```shell
- kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml
+ kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml
```
-1. Query the list of Services to verify that the Redis Master Service is running:
+
+
+1. Query the list of Services to verify that the MongoDB Service is running:
```shell
kubectl get service
@@ -102,77 +104,17 @@ The guestbook application needs to communicate to the Redis master to write its
```shell
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1m
- redis-master ClusterIP 10.0.0.151 <none> 6379/TCP 8s
+ mongo ClusterIP 10.0.0.151 <none> 27017/TCP 8s
```
{{< note >}}
-This manifest file creates a Service named `redis-master` with a set of labels that match the labels previously defined, so the Service routes network traffic to the Redis master Pod.
+This manifest file creates a Service named `mongo` with a set of labels that match the labels previously defined, so the Service routes network traffic to the MongoDB Pod.
{{< /note >}}
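+
+As an optional sanity check _(not part of the original tutorial steps)_, you can
+verify that the Service name resolves inside the cluster by running `nslookup`
+from a temporary Pod:
+
+```shell
+# busybox:1.28 is used because its nslookup is known to work reliably
+kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup mongo
+```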
-## Start up the Redis Slaves
-
-Although the Redis master is a single pod, you can make it highly available to meet traffic demands by adding replica Redis slaves.
-
-### Creating the Redis Slave Deployment
-
-Deployments scale based off of the configurations set in the manifest file. In this case, the Deployment object specifies two replicas.
-
-If there are not any replicas running, this Deployment would start the two replicas on your container cluster. Conversely, if there are more than two replicas running, it would scale down until two replicas are running.
-
-{{< codenew file="application/guestbook/redis-slave-deployment.yaml" >}}
-
-1. Apply the Redis Slave Deployment from the `redis-slave-deployment.yaml` file:
-
- ```shell
- kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-deployment.yaml
- ```
-
-1. Query the list of Pods to verify that the Redis Slave Pods are running:
-
- ```shell
- kubectl get pods
- ```
-
- The response should be similar to this:
-
- ```shell
- NAME READY STATUS RESTARTS AGE
- redis-master-1068406935-3lswp 1/1 Running 0 1m
- redis-slave-2005841000-fpvqc 0/1 ContainerCreating 0 6s
- redis-slave-2005841000-phfv9 0/1 ContainerCreating 0 6s
- ```
-
-### Creating the Redis Slave Service
-
-The guestbook application needs to communicate to Redis slaves to read data. To make the Redis slaves discoverable, you need to set up a Service. A Service provides transparent load balancing to a set of Pods.
-
-{{< codenew file="application/guestbook/redis-slave-service.yaml" >}}
-
-1. Apply the Redis Slave Service from the following `redis-slave-service.yaml` file:
-
- ```shell
- kubectl apply -f https://k8s.io/examples/application/guestbook/redis-slave-service.yaml
- ```
-
-1. Query the list of Services to verify that the Redis slave service is running:
-
- ```shell
- kubectl get services
- ```
-
- The response should be similar to this:
-
- ```
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- kubernetes ClusterIP 10.0.0.1 443/TCP 2m
- redis-master ClusterIP 10.0.0.151 6379/TCP 1m
- redis-slave ClusterIP 10.0.0.223 6379/TCP 6s
- ```
-
## Set up and Expose the Guestbook Frontend
-The guestbook application has a web frontend serving the HTTP requests written in PHP. It is configured to connect to the `redis-master` Service for write requests and the `redis-slave` service for Read requests.
+The guestbook application has a web frontend written in PHP that serves HTTP requests. It is configured to connect to the `mongo` Service to store guestbook entries.
### Creating the Guestbook Frontend Deployment
@@ -184,6 +126,11 @@ The guestbook application has a web frontend serving the HTTP requests written i
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
```
+
+
1. Query the list of Pods to verify that the three frontend replicas are running:
```shell
@@ -201,12 +148,12 @@ The guestbook application has a web frontend serving the HTTP requests written i
### Creating the Frontend Service
-The `redis-slave` and `redis-master` Services you applied are only accessible within the container cluster because the default type for a Service is [ClusterIP](/docs/concepts/services-networking/service/#publishing-services---service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
+The `mongo` Service you applied is only accessible within the Kubernetes cluster because the default type for a Service is [ClusterIP](/docs/concepts/services-networking/service/#publishing-services---service-types). `ClusterIP` provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
-If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the container cluster. Minikube can only expose Services through `NodePort`.
+If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the Kubernetes cluster. However, as a Kubernetes user you can use `kubectl port-forward` to access the Service even though it uses a `ClusterIP`.
{{< note >}}
-Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, simply delete or comment out `type: NodePort`, and uncomment `type: LoadBalancer`.
+Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, uncomment `type: LoadBalancer`.
{{< /note >}}
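+
+If you have already applied the frontend Service and your cluster supports load
+balancers, you can also switch it over afterwards with a patch _(an optional
+shortcut, not part of the original steps)_:
+
+```shell
+kubectl patch service frontend -p '{"spec": {"type": "LoadBalancer"}}'
+```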
{{< codenew file="application/guestbook/frontend-service.yaml" >}}
@@ -217,6 +164,11 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
```
+
+
1. Query the list of Services to verify that the frontend Service is running:
```shell
@@ -227,29 +179,27 @@ Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, su
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- frontend NodePort 10.0.0.112 <none> 80:31323/TCP 6s
+ frontend ClusterIP 10.0.0.112 <none> 80/TCP 6s
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4m
- redis-master ClusterIP 10.0.0.151 <none> 6379/TCP 2m
- redis-slave ClusterIP 10.0.0.223 <none> 6379/TCP 1m
+ mongo ClusterIP 10.0.0.151 <none> 27017/TCP 2m
```
-### Viewing the Frontend Service via `NodePort`
-
-If you deployed this application to Minikube or a local cluster, you need to find the IP address to view your Guestbook.
+### Viewing the Frontend Service via `kubectl port-forward`
-1. Run the following command to get the IP address for the frontend Service.
+1. Run the following command to forward port `8080` on your local machine to port `80` on the Service.
```shell
- minikube service frontend --url
+ kubectl port-forward svc/frontend 8080:80
```
The response should be similar to this:
```
- http://192.168.99.100:31323
+ Forwarding from 127.0.0.1:8080 -> 80
+ Forwarding from [::1]:8080 -> 80
```
-1. Copy the IP address, and load the page in your browser to view your guestbook.
+1. Load the page [http://localhost:8080](http://localhost:8080) in your browser to view your guestbook.
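+
+   Alternatively _(assuming `curl` is installed)_, you can fetch the page from
+   a second terminal while the port-forward is running:
+
+   ```shell
+   curl http://localhost:8080
+   ```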
### Viewing the Frontend Service via `LoadBalancer`
@@ -295,9 +245,7 @@ You can scale up or down as needed because your servers are defined as a Service
frontend-3823415956-k22zn 1/1 Running 0 54m
frontend-3823415956-w9gbt 1/1 Running 0 54m
frontend-3823415956-x2pld 1/1 Running 0 5s
- redis-master-1068406935-3lswp 1/1 Running 0 56m
- redis-slave-2005841000-fpvqc 1/1 Running 0 55m
- redis-slave-2005841000-phfv9 1/1 Running 0 55m
+ mongo-5cfd459dd4-lrcjb 1/1 Running 0 56m
```
1. Run the following command to scale down the number of frontend Pods:
@@ -318,9 +266,7 @@ You can scale up or down as needed because your servers are defined as a Service
NAME READY STATUS RESTARTS AGE
frontend-3823415956-k22zn 1/1 Running 0 1h
frontend-3823415956-w9gbt 1/1 Running 0 1h
- redis-master-1068406935-3lswp 1/1 Running 0 1h
- redis-slave-2005841000-fpvqc 1/1 Running 0 1h
- redis-slave-2005841000-phfv9 1/1 Running 0 1h
+ mongo-5cfd459dd4-lrcjb 1/1 Running 0 1h
```
@@ -332,20 +278,18 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
1. Run the following commands to delete all Pods, Deployments, and Services.
```shell
- kubectl delete deployment -l app=redis
- kubectl delete service -l app=redis
- kubectl delete deployment -l app=guestbook
- kubectl delete service -l app=guestbook
+ kubectl delete deployment -l app.kubernetes.io/name=mongo
+ kubectl delete service -l app.kubernetes.io/name=mongo
+ kubectl delete deployment -l app.kubernetes.io/name=guestbook
+ kubectl delete service -l app.kubernetes.io/name=guestbook
```
The responses should be:
```
- deployment.apps "redis-master" deleted
- deployment.apps "redis-slave" deleted
- service "redis-master" deleted
- service "redis-slave" deleted
- deployment.apps "frontend" deleted
+ deployment.apps "mongo" deleted
+ service "mongo" deleted
+ deployment.apps "frontend" deleted
service "frontend" deleted
```
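+
+1. Optionally _(an extra check, not part of the original steps)_, query the list
+   of Pods to confirm that none are left running:
+
+   ```shell
+   kubectl get pods
+   ```
+
+   The response should be this (assuming no other workloads run in the
+   `default` namespace):
+
+   ```
+   No resources found in default namespace.
+   ```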
@@ -365,7 +309,6 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
## {{% heading "whatsnext" %}}
-* Add [ELK logging and monitoring](/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk/) to your Guestbook application
* Complete the [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) Interactive Tutorials
* Use Kubernetes to create a blog using [Persistent Volumes for MySQL and Wordpress](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-wordpress-blog)
* Read more about [connecting applications](/docs/concepts/services-networking/connect-applications-service/)
diff --git a/content/en/examples/application/guestbook/frontend-deployment.yaml b/content/en/examples/application/guestbook/frontend-deployment.yaml
index 23d64be6442cc..613c654aa97b3 100644
--- a/content/en/examples/application/guestbook/frontend-deployment.yaml
+++ b/content/en/examples/application/guestbook/frontend-deployment.yaml
@@ -3,22 +3,24 @@ kind: Deployment
metadata:
name: frontend
labels:
- app: guestbook
+ app.kubernetes.io/name: guestbook
+ app.kubernetes.io/component: frontend
spec:
selector:
matchLabels:
- app: guestbook
- tier: frontend
+ app.kubernetes.io/name: guestbook
+ app.kubernetes.io/component: frontend
replicas: 3
template:
metadata:
labels:
- app: guestbook
- tier: frontend
+ app.kubernetes.io/name: guestbook
+ app.kubernetes.io/component: frontend
spec:
containers:
- - name: php-redis
- image: gcr.io/google-samples/gb-frontend:v4
+ - name: guestbook
+ image: paulczar/gb-frontend:v5
+ # image: gcr.io/google-samples/gb-frontend:v4
resources:
requests:
cpu: 100m
@@ -26,13 +28,5 @@ spec:
env:
- name: GET_HOSTS_FROM
value: dns
- # Using `GET_HOSTS_FROM=dns` requires your cluster to
- # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
- # service launched automatically. However, if the cluster you are using
- # does not have a built-in DNS service, you can instead
- # access an environment variable to find the master
- # service's host. To do so, comment out the 'value: dns' line above, and
- # uncomment the line below:
- # value: env
ports:
- containerPort: 80
diff --git a/content/en/examples/application/guestbook/frontend-service.yaml b/content/en/examples/application/guestbook/frontend-service.yaml
index 6f283f347b93f..34ad3771d755f 100644
--- a/content/en/examples/application/guestbook/frontend-service.yaml
+++ b/content/en/examples/application/guestbook/frontend-service.yaml
@@ -3,16 +3,14 @@ kind: Service
metadata:
name: frontend
labels:
- app: guestbook
- tier: frontend
+ app.kubernetes.io/name: guestbook
+ app.kubernetes.io/component: frontend
spec:
- # comment or delete the following line if you want to use a LoadBalancer
- type: NodePort
# if your cluster supports it, uncomment the following to automatically create
# an external load-balanced IP for the frontend service.
# type: LoadBalancer
ports:
- port: 80
selector:
- app: guestbook
- tier: frontend
+ app.kubernetes.io/name: guestbook
+ app.kubernetes.io/component: frontend
diff --git a/content/en/examples/application/guestbook/mongo-deployment.yaml b/content/en/examples/application/guestbook/mongo-deployment.yaml
new file mode 100644
index 0000000000000..04908ce25b1dc
--- /dev/null
+++ b/content/en/examples/application/guestbook/mongo-deployment.yaml
@@ -0,0 +1,31 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: mongo
+ labels:
+ app.kubernetes.io/name: mongo
+ app.kubernetes.io/component: backend
+spec:
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: mongo
+ app.kubernetes.io/component: backend
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: mongo
+ app.kubernetes.io/component: backend
+ spec:
+ containers:
+ - name: mongo
+ image: mongo:4.2
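+ # be explicit about the bind address so that other Pods in the
+ # cluster can connect (mongod itself defaults to localhost only)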
+ args:
+ - --bind_ip
+ - 0.0.0.0
+ resources:
+ requests:
+ cpu: 100m
+ memory: 100Mi
+ ports:
+ - containerPort: 27017
diff --git a/content/en/examples/application/guestbook/mongo-service.yaml b/content/en/examples/application/guestbook/mongo-service.yaml
new file mode 100644
index 0000000000000..b9cef607bcf79
--- /dev/null
+++ b/content/en/examples/application/guestbook/mongo-service.yaml
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: mongo
+ labels:
+ app.kubernetes.io/name: mongo
+ app.kubernetes.io/component: backend
+spec:
+ ports:
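+ # 27017 is the default mongod port; targetPort matches the containerPort of the mongo Deployment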
+ - port: 27017
+ targetPort: 27017
+ selector:
+ app.kubernetes.io/name: mongo
+ app.kubernetes.io/component: backend
diff --git a/content/en/examples/application/guestbook/redis-master-deployment.yaml b/content/en/examples/application/guestbook/redis-master-deployment.yaml
deleted file mode 100644
index 478216d1accfa..0000000000000
--- a/content/en/examples/application/guestbook/redis-master-deployment.yaml
+++ /dev/null
@@ -1,29 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: redis-master
- labels:
- app: redis
-spec:
- selector:
- matchLabels:
- app: redis
- role: master
- tier: backend
- replicas: 1
- template:
- metadata:
- labels:
- app: redis
- role: master
- tier: backend
- spec:
- containers:
- - name: master
- image: k8s.gcr.io/redis:e2e # or just image: redis
- resources:
- requests:
- cpu: 100m
- memory: 100Mi
- ports:
- - containerPort: 6379
diff --git a/content/en/examples/application/guestbook/redis-master-service.yaml b/content/en/examples/application/guestbook/redis-master-service.yaml
deleted file mode 100644
index 65cef2191c493..0000000000000
--- a/content/en/examples/application/guestbook/redis-master-service.yaml
+++ /dev/null
@@ -1,17 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
- name: redis-master
- labels:
- app: redis
- role: master
- tier: backend
-spec:
- ports:
- - name: redis
- port: 6379
- targetPort: 6379
- selector:
- app: redis
- role: master
- tier: backend
diff --git a/content/en/examples/application/guestbook/redis-slave-deployment.yaml b/content/en/examples/application/guestbook/redis-slave-deployment.yaml
deleted file mode 100644
index 1a7b04386a4a5..0000000000000
--- a/content/en/examples/application/guestbook/redis-slave-deployment.yaml
+++ /dev/null
@@ -1,40 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: redis-slave
- labels:
- app: redis
-spec:
- selector:
- matchLabels:
- app: redis
- role: slave
- tier: backend
- replicas: 2
- template:
- metadata:
- labels:
- app: redis
- role: slave
- tier: backend
- spec:
- containers:
- - name: slave
- image: gcr.io/google_samples/gb-redisslave:v3
- resources:
- requests:
- cpu: 100m
- memory: 100Mi
- env:
- - name: GET_HOSTS_FROM
- value: dns
- # Using `GET_HOSTS_FROM=dns` requires your cluster to
- # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
- # service launched automatically. However, if the cluster you are using
- # does not have a built-in DNS service, you can instead
- # access an environment variable to find the master
- # service's host. To do so, comment out the 'value: dns' line above, and
- # uncomment the line below:
- # value: env
- ports:
- - containerPort: 6379
diff --git a/content/en/examples/application/guestbook/redis-slave-service.yaml b/content/en/examples/application/guestbook/redis-slave-service.yaml
deleted file mode 100644
index 238fd63fb6a29..0000000000000
--- a/content/en/examples/application/guestbook/redis-slave-service.yaml
+++ /dev/null
@@ -1,15 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
- name: redis-slave
- labels:
- app: redis
- role: slave
- tier: backend
-spec:
- ports:
- - port: 6379
- selector:
- app: redis
- role: slave
- tier: backend
diff --git a/content/en/examples/examples_test.go b/content/en/examples/examples_test.go
index 012a2acaa79b4..982ddbd69353f 100644
--- a/content/en/examples/examples_test.go
+++ b/content/en/examples/examples_test.go
@@ -148,6 +148,11 @@ func getCodecForObject(obj runtime.Object) (runtime.Codec, error) {
}
func validateObject(obj runtime.Object) (errors field.ErrorList) {
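+ // validation options shared by the pod-template-carrying validators below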
+ podValidationOptions := validation.PodValidationOptions{
+ AllowMultipleHugePageResources: true,
+ AllowDownwardAPIHugePages: true,
+ }
+
// Enable CustomPodDNS for testing
// feature.DefaultFeatureGate.Set("CustomPodDNS=true")
switch t := obj.(type) {
@@ -182,7 +187,7 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
opts := validation.PodValidationOptions{
AllowMultipleHugePageResources: true,
}
- errors = validation.ValidatePod(t, opts)
+ errors = validation.ValidatePodCreate(t, opts)
case *api.PodList:
for i := range t.Items {
errors = append(errors, validateObject(&t.Items[i])...)
@@ -191,12 +196,12 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
- errors = validation.ValidatePodTemplate(t)
+ errors = validation.ValidatePodTemplate(t, podValidationOptions)
case *api.ReplicationController:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
- errors = validation.ValidateReplicationController(t)
+ errors = validation.ValidateReplicationController(t, podValidationOptions)
case *api.ReplicationControllerList:
for i := range t.Items {
errors = append(errors, validateObject(&t.Items[i])...)
@@ -215,7 +220,11 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
- errors = validation.ValidateService(t, true)
+ // handle clusterIPs, logic copied from service strategy
+ if len(t.Spec.ClusterIP) > 0 && len(t.Spec.ClusterIPs) == 0 {
+ t.Spec.ClusterIPs = []string{t.Spec.ClusterIP}
+ }
+ errors = validation.ValidateService(t)
case *api.ServiceAccount:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
@@ -250,12 +259,12 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
- errors = apps_validation.ValidateDaemonSet(t)
+ errors = apps_validation.ValidateDaemonSet(t, podValidationOptions)
case *apps.Deployment:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
- errors = apps_validation.ValidateDeployment(t)
+ errors = apps_validation.ValidateDeployment(t, podValidationOptions)
case *networking.Ingress:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
@@ -265,18 +274,30 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
Version: legacyscheme.Scheme.PrioritizedVersionsForGroup(networking.GroupName)[0].Version,
}
errors = networking_validation.ValidateIngressCreate(t, gv)
+ case *networking.IngressClass:
+ /*
+ if t.Namespace == "" {
+ t.Namespace = api.NamespaceDefault
+ }
+ gv := schema.GroupVersion{
+ Group: networking.GroupName,
+ Version: legacyscheme.Scheme.PrioritizedVersionsForGroup(networking.GroupName)[0].Version,
+ }
+ */
+ errors = networking_validation.ValidateIngressClass(t)
+
case *policy.PodSecurityPolicy:
errors = policy_validation.ValidatePodSecurityPolicy(t)
case *apps.ReplicaSet:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
- errors = apps_validation.ValidateReplicaSet(t)
+ errors = apps_validation.ValidateReplicaSet(t, podValidationOptions)
case *batch.CronJob:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
}
- errors = batch_validation.ValidateCronJob(t)
+ errors = batch_validation.ValidateCronJob(t, podValidationOptions)
case *networking.NetworkPolicy:
if t.Namespace == "" {
t.Namespace = api.NamespaceDefault
@@ -287,6 +308,9 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
t.Namespace = api.NamespaceDefault
}
errors = policy_validation.ValidatePodDisruptionBudget(t)
+ case *rbac.ClusterRole:
+ // clusterrole does not accept namespace
+ errors = rbac_validation.ValidateClusterRole(t)
case *rbac.ClusterRoleBinding:
// clusterrolebinding does not accept namespace
errors = rbac_validation.ValidateClusterRoleBinding(t)
@@ -414,6 +438,7 @@ func TestExampleObjectSchemas(t *testing.T) {
"storagelimits": {&api.LimitRange{}},
},
"admin/sched": {
+ "clusterrole": {&rbac.ClusterRole{}},
"my-scheduler": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &rbac.ClusterRoleBinding{}, &apps.Deployment{}},
"pod1": {&api.Pod{}},
"pod2": {&api.Pod{}},
@@ -539,6 +564,7 @@ func TestExampleObjectSchemas(t *testing.T) {
"dapi-envars-pod": {&api.Pod{}},
"dapi-volume": {&api.Pod{}},
"dapi-volume-resources": {&api.Pod{}},
+ "dependent-envars": {&api.Pod{}},
"envars": {&api.Pod{}},
"pod-multiple-secret-env-variable": {&api.Pod{}},
"pod-secret-envFrom": {&api.Pod{}},
@@ -596,29 +622,41 @@ func TestExampleObjectSchemas(t *testing.T) {
"load-balancer-example": {&apps.Deployment{}},
},
"service/access": {
- "frontend": {&api.Service{}, &apps.Deployment{}},
- "hello-application": {&apps.Deployment{}},
- "hello-service": {&api.Service{}},
- "hello": {&apps.Deployment{}},
+ "backend-deployment": {&apps.Deployment{}},
+ "backend-service": {&api.Service{}},
+ "frontend-deployment": {&apps.Deployment{}},
+ "frontend-service": {&api.Service{}},
+ "hello-application": {&apps.Deployment{}},
},
"service/networking": {
- "curlpod": {&apps.Deployment{}},
- "custom-dns": {&api.Pod{}},
- "dual-stack-default-svc": {&api.Service{}},
- "dual-stack-ipv4-svc": {&api.Service{}},
- "dual-stack-ipv6-lb-svc": {&api.Service{}},
- "dual-stack-ipv6-svc": {&api.Service{}},
- "hostaliases-pod": {&api.Pod{}},
- "ingress": {&networking.Ingress{}},
- "network-policy-allow-all-egress": {&networking.NetworkPolicy{}},
- "network-policy-allow-all-ingress": {&networking.NetworkPolicy{}},
- "network-policy-default-deny-egress": {&networking.NetworkPolicy{}},
- "network-policy-default-deny-ingress": {&networking.NetworkPolicy{}},
- "network-policy-default-deny-all": {&networking.NetworkPolicy{}},
- "nginx-policy": {&networking.NetworkPolicy{}},
- "nginx-secure-app": {&api.Service{}, &apps.Deployment{}},
- "nginx-svc": {&api.Service{}},
- "run-my-nginx": {&apps.Deployment{}},
+ "curlpod": {&apps.Deployment{}},
+ "custom-dns": {&api.Pod{}},
+ "dual-stack-default-svc": {&api.Service{}},
+ "dual-stack-ipfamilies-ipv6": {&api.Service{}},
+ "dual-stack-ipv6-svc": {&api.Service{}},
+ "dual-stack-prefer-ipv6-lb-svc": {&api.Service{}},
+ "dual-stack-preferred-ipfamilies-svc": {&api.Service{}},
+ "dual-stack-preferred-svc": {&api.Service{}},
+ "external-lb": {&networking.IngressClass{}},
+ "example-ingress": {&networking.Ingress{}},
+ "hostaliases-pod": {&api.Pod{}},
+ "ingress-resource-backend": {&networking.Ingress{}},
+ "ingress-wildcard-host": {&networking.Ingress{}},
+ "minimal-ingress": {&networking.Ingress{}},
+ "name-virtual-host-ingress": {&networking.Ingress{}},
+ "name-virtual-host-ingress-no-third-host": {&networking.Ingress{}},
+ "network-policy-allow-all-egress": {&networking.NetworkPolicy{}},
+ "network-policy-allow-all-ingress": {&networking.NetworkPolicy{}},
+ "network-policy-default-deny-egress": {&networking.NetworkPolicy{}},
+ "network-policy-default-deny-ingress": {&networking.NetworkPolicy{}},
+ "network-policy-default-deny-all": {&networking.NetworkPolicy{}},
+ "nginx-policy": {&networking.NetworkPolicy{}},
+ "nginx-secure-app": {&api.Service{}, &apps.Deployment{}},
+ "nginx-svc": {&api.Service{}},
+ "run-my-nginx": {&apps.Deployment{}},
+ "simple-fanout-example": {&networking.Ingress{}},
+ "test-ingress": {&networking.Ingress{}},
+ "tls-example-ingress": {&networking.Ingress{}},
},
"windows": {
"configmap-pod": {&api.ConfigMap{}, &api.Pod{}},
diff --git a/content/en/examples/policy/priority-class-resourcequota.yaml b/content/en/examples/policy/priority-class-resourcequota.yaml
new file mode 100644
index 0000000000000..7350d00c8f397
--- /dev/null
+++ b/content/en/examples/policy/priority-class-resourcequota.yaml
@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+ name: pods-cluster-services
+spec:
+ scopeSelector:
+ matchExpressions:
+ - operator : In
+ scopeName: PriorityClass
+ values: ["cluster-services"]
\ No newline at end of file
diff --git a/content/es/docs/concepts/configuration/secret.md b/content/es/docs/concepts/configuration/secret.md
index f1e68ee0f5db0..7120f0476bb9d 100644
--- a/content/es/docs/concepts/configuration/secret.md
+++ b/content/es/docs/concepts/configuration/secret.md
@@ -10,16 +10,13 @@ feature:
weight: 50
---
-
-{{% capture overview %}}
+
Los objetos de tipo {{< glossary_tooltip text="Secret" term_id="secret" >}} en Kubernetes te permiten almacenar y administrar información confidencial, como
contraseñas, tokens OAuth y llaves ssh. Poniendo esta información en un Secret
es más seguro y más flexible que ponerlo en la definición de un {{< glossary_tooltip term_id="pod" >}} o en un {{< glossary_tooltip text="container image" term_id="image" >}}. Ver [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) para más información.
-{{% /capture %}}
-
-{{% capture body %}}
+
## Introducción a Secrets
@@ -58,9 +55,11 @@ empaqueta esos archivos en un Secret y crea el objeto en el Apiserver.
```shell
kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
```
-```
+
+```none
Secret "db-user-pass" created
```
+
{{< note >}}
Si la contraseña que está utilizando tiene caracteres especiales como por ejemplo `$`, `\`, `*`, o `!`, es posible que sean interpretados por tu intérprete de comandos y es necesario escapar cada carácter utilizando `\` o introduciéndolos entre comillas simples `'`.
Por ejemplo, si tu password actual es `S!B\*d$zDsb`, deberías ejecutar el comando de esta manera:
@@ -76,14 +75,17 @@ Puedes comprobar que el Secret se haya creado, así:
```shell
kubectl get secrets
```
-```
+
+```none
NAME TYPE DATA AGE
db-user-pass Opaque 2 51s
```
+
```shell
kubectl describe secrets/db-user-pass
```
-```
+
+```none
Name: db-user-pass
Namespace: default
Labels: <none>
@@ -137,7 +139,8 @@ Ahora escribe un Secret usando [`kubectl apply`](/docs/reference/generated/kubec
```shell
kubectl apply -f ./secret.yaml
```
-```
+
+```none
secret "mysecret" created
```
@@ -242,6 +245,7 @@ desde 1.14. Con esta nueva característica,
puedes también crear un Secret a partir de un generador y luego aplicarlo para crear el objeto en el Apiserver. Los generadores deben ser especificados en un `kustomization.yaml` dentro de un directorio.
Por ejemplo, para generar un Secret a partir de los archivos `./username.txt` y `./password.txt`
+
```shell
# Crear un fichero llamado kustomization.yaml con SecretGenerator
cat <<EOF >./kustomization.yaml
@@ -281,9 +285,10 @@ username.txt: 5 bytes
Por ejemplo, para generar un Secret a partir de literales `username=admin` y `password=secret`,
puedes especificar el generador del Secret en `kustomization.yaml` como:
+
```shell
# Crea un fichero kustomization.yaml con SecretGenerator
-$ cat <<EOF >./kustomization.yaml
+cat <<EOF >./kustomization.yaml
secretGenerator:
- name: db-user-pass
literals:
@@ -291,11 +296,14 @@ secretGenerator:
- password=secret
EOF
```
+
Aplica el directorio kustomization para crear el objeto Secret.
+
```shell
-$ kubectl apply -k .
+kubectl apply -k .
secret/db-user-pass-dddghtt9b5 created
```
+
{{< note >}}
El nombre generado del Secret tiene un sufijo agregado al hashing de los contenidos. Esto asegura que se genera un nuevo Secret cada vez que el contenido es modificado.
{{< /note >}}
@@ -307,7 +315,8 @@ Los Secrets se pueden recuperar a través del comando `kubectl get secret` . Por
```shell
kubectl get secret mysecret -o yaml
```
-```
+
+```none
apiVersion: v1
kind: Secret
metadata:
@@ -328,7 +337,8 @@ Decodifica el campo de contraseña:
```shell
echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
```
-```
+
+```none
1f2d1e2e67df
```
@@ -480,7 +490,8 @@ Este es el resultado de comandos ejecutados dentro del contenedor del ejemplo an
```shell
ls /etc/foo/
```
-```
+
+```none
username
password
```
@@ -488,15 +499,16 @@ password
```shell
cat /etc/foo/username
```
-```
+
+```none
admin
```
-
```shell
cat /etc/foo/password
```
-```
+
+```none
1f2d1e2e67df
```
@@ -562,13 +574,16 @@ Este es el resultado de comandos ejecutados dentro del contenedor del ejemplo an
```shell
echo $SECRET_USERNAME
```
-```
+
+```none
admin
```
+
```shell
echo $SECRET_PASSWORD
```
-```
+
+```none
1f2d1e2e67df
```
@@ -641,7 +656,7 @@ Cree un fichero kustomization.yaml con SecretGenerator conteniendo algunas llave
kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
```
-```
+```none
secret "ssh-key-secret" created
```
@@ -649,7 +664,6 @@ secret "ssh-key-secret" created
Piensa detenidamente antes de enviar tus propias llaves ssh: otros usuarios del cluster pueden tener acceso al Secret. Utiliza una cuenta de servicio a la que quieras que tengan acceso todos los usuarios con los que compartes el cluster de Kubernetes, y que puedas revocar si se ve comprometida.
{{< /caution >}}
-
Ahora podemos crear un pod que haga referencia al Secret con la llave ssh key y lo consuma en un volumen:
```yaml
@@ -691,16 +705,19 @@ Crear un fichero kustomization.yaml con SecretGenerator
```shell
kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11
```
-```
+
+```none
secret "prod-db-secret" created
```
```shell
kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests
```
-```
+
+```none
secret "test-db-secret" created
```
+
{{< note >}}
Caracteres especiales como `$`, `\*`, y `!` requieren ser escapados.
Si el password que estás usando tiene caracteres especiales, necesitas escaparlos usando el carácter `\\`. Por ejemplo, si tu password actual es `S!B\*d$zDsb`, deberías ejecutar el comando de esta forma:
@@ -715,7 +732,7 @@ No necesitas escapar caracteres especiales en contraseñas de los archivos (`--f
Ahora haz los pods:
```shell
-$ cat <<EOF > pod.yaml
+cat <<EOF > pod.yaml
apiVersion: v1
kind: List
items:
@@ -759,8 +776,9 @@ EOF
```
Añade los pods al mismo fichero kustomization.yaml
+
```shell
-$ cat <<EOF >> kustomization.yaml
+cat <<EOF >> kustomization.yaml
resources:
- pod.yaml
EOF
@@ -833,7 +851,6 @@ spec:
mountPath: "/etc/secret-volume"
```
-
El `secret-volume` contendrá un solo archivo, llamado `.secret-file`, y
el `dotfile-test-container` tendrá este fichero presente en el path
`/etc/secret-volume/.secret-file`.
@@ -874,7 +891,6 @@ para que los clientes puedan `watch` recursos individuales, y probablemente esta
## Propiedades de seguridad
-
### Protecciones
Debido a que los objetos `Secret` se pueden crear independientemente de los `Pods` que los usan, hay menos riesgo de que el Secret sea expuesto durante el flujo de trabajo de creación, visualización y edición de pods. El sistema también puede tomar precauciones con los objetos `Secret`, tal como evitar escribirlos en el disco siempre que sea posible.
@@ -906,7 +922,4 @@ para datos secretos, para que los Secrets no se almacenen en claro en {{< glossa
- Un usuario que puede crear un pod que usa un Secret también puede ver el valor del Secret. Incluso si una política del apiserver no permite que ese usuario lea el objeto Secret, el usuario puede ejecutar el pod que expone el Secret.
- Actualmente, cualquier persona con root en cualquier nodo puede leer _cualquier_ secret del apiserver, haciéndose pasar por el kubelet. Es una característica planificada enviar Secrets solo a los nodos que realmente los requieran, para restringir el impacto de un exploit de root a un único nodo.
-
-{{% capture whatsnext %}}
-
-{{% /capture %}}
+## {{% heading "whatsnext" %}}
diff --git a/content/fr/docs/setup/learning-environment/minikube.md b/content/fr/docs/setup/learning-environment/minikube.md
index 77be61831fc14..2ab0b3ae5ae99 100644
--- a/content/fr/docs/setup/learning-environment/minikube.md
+++ b/content/fr/docs/setup/learning-environment/minikube.md
@@ -48,10 +48,10 @@ Suivez les étapes ci-dessous pour commencer et explorer Minikube.
Starting local Kubernetes cluster...
```
- Pour plus d'informations sur le démarrage de votre cluster avec une version spécifique de Kubernetes, une machine virtuelle ou un environnement de conteneur, voir [Démarrage d'un cluster].(#starting-a-cluster).
+ Pour plus d'informations sur le démarrage de votre cluster avec une version spécifique de Kubernetes, une machine virtuelle ou un environnement de conteneur, voir [Démarrage d'un cluster](#starting-a-cluster).
2. Vous pouvez maintenant interagir avec votre cluster à l'aide de kubectl.
- Pour plus d'informations, voir [Interagir avec votre cluster.](#interacting-with-your-cluster).
+ Pour plus d'informations, voir [Interagir avec votre cluster](#interacting-with-your-cluster).
Créons un déploiement Kubernetes en utilisant une image existante nommée `echoserver`, qui est un serveur HTTP, et exposons-la sur le port 8080 à l’aide de `--port`.
@@ -529,5 +529,3 @@ Les contributions, questions et commentaires sont les bienvenus et sont encourag
Les développeurs de minikube sont dans le canal #minikube du [Slack](https://kubernetes.slack.com) de Kubernetes (recevoir une invitation [ici](http://slack.kubernetes.io/)).
Nous avons également la liste de diffusion [kubernetes-dev Google Groupes](https://groups.google.com/forum/#!forum/kubernetes-dev).
Si vous publiez sur la liste, veuillez préfixer votre sujet avec "minikube:".
-
-
diff --git a/content/fr/docs/tasks/tools/install-minikube.md b/content/fr/docs/tasks/tools/install-minikube.md
index b079eba6ec666..745f1e4c227ba 100644
--- a/content/fr/docs/tasks/tools/install-minikube.md
+++ b/content/fr/docs/tasks/tools/install-minikube.md
@@ -84,7 +84,7 @@ Vous pouvez télécharger les packages `.deb` depuis [Docker](https://www.docker
{{< caution >}}
Le pilote VM `none` peut entraîner des problèmes de sécurité et de perte de données.
-Avant d'utiliser `--driver=none`, consultez [cette documentation] (https://minikube.sigs.k8s.io/docs/reference/drivers/none/) pour plus d'informations.
+Avant d'utiliser `--driver=none`, consultez [cette documentation](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) pour plus d'informations.
{{< /caution >}}
Minikube prend également en charge un `vm-driver=podman` similaire au pilote Docker. Podman est exécuté en tant que superutilisateur (utilisateur root), c'est le meilleur moyen de garantir que vos conteneurs ont un accès complet à toutes les fonctionnalités disponibles sur votre système.
diff --git a/content/id/docs/contribute/_index.md b/content/id/docs/contribute/_index.md
index d793a789672d9..0105a297913f1 100644
--- a/content/id/docs/contribute/_index.md
+++ b/content/id/docs/contribute/_index.md
@@ -75,5 +75,5 @@ terhadap dokumentasi Kubernetes, tetapi daftar ini dapat membantumu memulainya.
- Untuk berkontribusi ke komunitas Kubernetes melalui forum-forum daring seperti Twitter atau Stack Overflow, atau mengetahui tentang pertemuan komunitas (_meetup_) lokal dan acara-acara Kubernetes, kunjungi [situs komunitas Kubernetes](/community/).
- Untuk mulai berkontribusi ke pengembangan fitur, baca [_cheatseet_ kontributor](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet).
-
+- Untuk kontribusi khusus ke halaman Bahasa Indonesia, baca [Dokumentasi Khusus Untuk Translasi Bahasa Indonesia](/docs/contribute/localization_id.md)
diff --git a/content/id/docs/contribute/localization_id.md b/content/id/docs/contribute/localization_id.md
new file mode 100644
index 0000000000000..5a9c491297bc8
--- /dev/null
+++ b/content/id/docs/contribute/localization_id.md
@@ -0,0 +1,178 @@
+---
+title: Dokumentasi Khusus Untuk Translasi Bahasa Indonesia
+content_type: concept
+---
+
+
+
+Panduan khusus untuk bergabung ke komunitas SIG DOC Indonesia dan melakukan
+kontribusi untuk mentranslasikan dokumentasi Kubernetes ke dalam Bahasa
+Indonesia.
+
+
+
+## Manajemen _Milestone_ Tim {#manajemen-milestone-tim}
+
+Secara umum siklus translasi dokumentasi ke Bahasa Indonesia akan dilakukan
+3 kali dalam setahun (sekitar setiap 4 bulan). Untuk menentukan dan mengevaluasi
+pencapaian atau _milestone_ dalam kurun waktu tersebut [jadwal rapat daring
+reguler tim Bahasa Indonesia](https://zoom.us/j/6072809193) dilakukan secara
+konsisten setiap dua minggu sekali. Dalam [agenda rapat ini](https://docs.google.com/document/d/1Qrj-WUAMA11V6KmcfxJsXcPeWwMbFsyBGV4RGbrSRXY)
+juga dilakukan pemilihan PR _Wrangler_ untuk dua minggu ke depan. Tugas PR
+_Wrangler_ tim Bahasa Indonesia serupa dengan PR _Wrangler_ dari proyek
+_upstream_.
+
+Target pencapaian atau _milestone_ tim akan dirilis sebagai
+[_issue tracking_ seperti ini](https://github.com/kubernetes/website/issues/22296)
+pada Kubernetes GitHub Website setiap 4 bulan. Dan bersama dengan informasi
+PR _Wrangler_ yang dipilih setiap dua minggu, keduanya akan diumumkan di Slack
+_channel_ [#kubernetes-docs-id](https://kubernetes.slack.com/archives/CJ1LUCUHM)
+dari Komunitas Kubernetes.
+
+## Cara Memulai Translasi
+
+Untuk menerjemahkan satu halaman Bahasa Inggris ke Bahasa Indonesia, lakukan
+langkah-langkah berikut ini:
+
+* Cek halaman _issue_ di GitHub dan pastikan tidak ada orang lain yang sudah
+mengklaim halaman kamu dalam daftar periksa atau komentar-komentar sebelumnya.
+* Klaim halaman kamu pada _issue_ di GitHub dengan memberikan komentar di bawah
+dengan nama halaman yang ingin kamu terjemahkan dan ambillah hanya satu halaman
+dalam satu waktu.
+* _Fork_ [repo ini](https://github.com/kubernetes/website), buat terjemahan
+kamu, dan kirimkan PR (_pull request_) dengan label `language/id`
+* Setelah dikirim, pengulas akan memberikan komentar dalam beberapa hari, dan
+tolong untuk menjawab semua komentar. Direkomendasikan juga untuk melakukan
+[_squash_](https://github.com/wprig/wprig/wiki/How-to-squash-commits) _commit_
+kamu dengan pesan _commit_ yang baik.
+
+
+## Informasi Acuan Untuk Translasi
+
+Tidak ada panduan gaya khusus untuk menulis translasi ke bahasa Indonesia.
+Namun, secara umum kita dapat mengikuti panduan gaya bahasa Inggris dengan
+beberapa tambahan untuk kata-kata impor yang dicetak miring.
+
+Harap berkomitmen dengan terjemahan kamu dan pada saat kamu mendapatkan komentar
+dari pengulas, silahkan atasi sebaik-baiknya. Kami berharap halaman yang
+diklaim akan diterjemahkan dalam waktu kurang lebih dua minggu. Jika ternyata
+kamu tidak dapat berkomitmen lagi, beri tahu para pengulas agar mereka dapat
+memberikan halaman tersebut ke orang lain.
+
+Beberapa acuan tambahan dalam melakukan translasi silahkan lihat informasi
+berikut ini:
+
+### Daftar Glosarium Translasi dari tim SIG DOC Indonesia
+Untuk kata-kata selengkapnya silahkan baca glosariumnya
+[di sini](#glosarium-indonesia)
+
+### KBBI
+Konsultasikan dengan KBBI (Kamus Besar Bahasa Indonesia)
+[di sini](https://kbbi.web.id/) dari
+[Kemendikbud](https://kbbi.kemdikbud.go.id/).
+
+### RSNI Glosarium dari Ivan Lanin
+[RSNI Glosarium](https://github.com/jk8s/sig-docs-id-localization-how-tos/blob/master/resources/RSNI-glossarium.pdf)
+dapat digunakan untuk memahami bagaimana menerjemahkan berbagai istilah teknis
+dan khusus Kubernetes.
+
+
+## Panduan Penulisan _Source Code_
+
+### Mengikuti kode asli dari dokumentasi bahasa Inggris
+
+Untuk kenyamanan pemeliharaan, ikuti lebar teks asli dalam kode bahasa Inggris.
+Dengan kata lain, jika teks asli ditulis dalam baris yang panjang tanpa putus
+satu baris, maka teks tersebut ditulis panjang dalam satu baris meskipun dalam
+bahasa Indonesia. Jagalah agar tetap serupa.
+
+### Hapus nama reviewer di kode asli bahasa Inggris
+
+Terkadang _reviewer_ ditentukan di bagian atas kode di teks asli Bahasa Inggris.
+Secara umum, _reviewer-reviewer_ halaman aslinya akan kesulitan untuk meninjau
+halaman dalam bahasa Indonesia, jadi hapus kode yang terkait dengan informasi
+_reviewer_ dari metadata kode tersebut.
+
+
+## Panduan Penulisan Kata-kata Translasi
+
+### Panduan umum
+
+* Gunakan "kamu" daripada "Anda" sebagai subyek agar lebih bersahabat dengan
+para pembaca dokumentasi.
+* Tulislah miring untuk kata-kata bahasa Inggris yang diimpor jika kamu tidak
+dapat menemukan kata-kata tersebut dalam bahasa Indonesia.
+*Benar*: _controller_. *Salah*: controller, `controller`
+
+### Panduan untuk kata-kata API Objek Kubernetes
+
+Gunakan gaya "CamelCase" untuk menulis objek API Kubernetes, lihat daftar
+lengkapnya [di sini](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/).
+Sebagai contoh:
+
+* *Benar*: PersistentVolume. *Salah*: volume persisten, `PersistentVolume`,
+persistentVolume
+* *Benar*: Pod. *Salah*: pod, `pod`, "pod"
+
+*Tips* : Biasanya API objek sudah ditulis dalam huruf kapital pada halaman asli
+bahasa Inggris.
+
+### Panduan untuk kata-kata yang sama dengan API Objek Kubernetes
+
+Ada beberapa kata-kata yang serupa dengan nama API objek dari Kubernetes dan
+dapat mengacu ke arti yang lebih umum (tidak selalu dalam konteks Kubernetes).
+Sebagai contoh: _service_, _container_, _node_, dan lain sebagainya. Kata-kata
+ini sebaiknya ditranslasikan ke Bahasa Indonesia, sebagai contoh _service_ menjadi
+layanan, _container_ menjadi kontainer.
+
+*Tips* : Biasanya kata-kata yang mengacu ke arti yang lebih umum sudah *tidak*
+ditulis dalam huruf kapital pada halaman asli bahasa Inggris.
+
+### Panduan untuk "Feature Gate" Kubernetes
+
+Istilah [_feature gate_](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/)
+Kubernetes tidak perlu diterjemahkan ke dalam bahasa Indonesia dan tetap
+dipertahankan dalam bentuk aslinya.
+
+Contoh dari _feature gate_ adalah sebagai berikut:
+
+- Accelerators
+- AdvancedAuditing
+- AffinityInAnnotations
+- AllowExtTrafficLocalEndpoints
+- ...
+
+### Glosarium Indonesia {#glosarium-indonesia}
+
+Inggris | Tipe Kata | Indonesia | Sumber | Contoh Kalimat
+---|---|---|---|---
+cluster | | klaster | |
+container | | kontainer | |
+node | kata benda | node | |
+file | | berkas | |
+service | kata benda | layanan | |
+set | | sekumpulan | |
+resource | | sumber daya | |
+default | | bawaan atau standar (tergantung konteks) | | Secara bawaan, ...; Pada konfigurasi dan instalasi standar, ...
+deploy | | menggelar | |
+image | | _image_ | |
+request | | permintaan | |
+object | kata benda | objek | https://kbbi.web.id/objek |
+command | | perintah | https://kbbi.web.id/perintah |
+view | | tampilan | |
+support | | tersedia atau dukungan (tergantung konteks) | | "This feature is supported on version X; Fitur ini tersedia pada versi X; Supported by community; Didukung oleh komunitas"
+release | kata benda | rilis | https://kbbi.web.id/rilis |
+tool | | perangkat | |
+deployment | | penggelaran | |
+client | | klien | |
+reference | | rujukan | |
+update | | pembaruan | | The latest update... ; Pembaruan terkini...
+state | | _state_ | |
+task | | _task_ | |
+certificate | | sertifikat | |
+install | | instalasi | https://kbbi.web.id/instalasi |
+scale | | skala | |
+process | kata kerja | memproses | https://kbbi.web.id/proses |
+replica | kata benda | replika | https://kbbi.web.id/replika |
+flag | | tanda, parameter, argumen | |
+event | | _event_ | |
\ No newline at end of file
diff --git a/content/ko/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md b/content/ko/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md
index b7550a3d83fb4..d5405651871f4 100644
--- a/content/ko/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md
+++ b/content/ko/blog/_posts/2020-12-02-dont-panic-kubernetes-and-docker.md
@@ -5,7 +5,9 @@ date: 2020-12-02
slug: dont-panic-kubernetes-and-docker
---
-**작성자:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas
+**저자:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas
+
+**번역:** 박재화(삼성SDS), 손석호(한국전자통신연구원)
쿠버네티스는 v1.20 이후 컨테이너 런타임으로서
[도커를
diff --git a/content/ko/docs/concepts/cluster-administration/system-metrics.md b/content/ko/docs/concepts/cluster-administration/system-metrics.md
index 03eb904ee3889..08b7b79d0d59e 100644
--- a/content/ko/docs/concepts/cluster-administration/system-metrics.md
+++ b/content/ko/docs/concepts/cluster-administration/system-metrics.md
@@ -1,9 +1,5 @@
---
-title: 쿠버네티스 컨트롤 플레인에 대한 메트릭
-
-
-
-
+title: 쿠버네티스 시스템 컴포넌트에 대한 메트릭
content_type: concept
weight: 60
---
@@ -12,7 +8,7 @@ weight: 60
시스템 컴포넌트 메트릭으로 내부에서 발생하는 상황을 더 잘 파악할 수 있다. 메트릭은 대시보드와 경고를 만드는 데 특히 유용하다.
-쿠버네티스 컨트롤 플레인의 메트릭은 [프로메테우스 형식](https://prometheus.io/docs/instrumenting/exposition_formats/)으로 출력된다.
+쿠버네티스 컴포넌트의 메트릭은 [프로메테우스 형식](https://prometheus.io/docs/instrumenting/exposition_formats/)으로 출력된다.
이 형식은 구조화된 평문으로 디자인되어 있으므로 사람과 기계 모두가 쉽게 읽을 수 있다.
@@ -36,7 +32,7 @@ weight: 60
클러스터가 {{< glossary_tooltip term_id="rbac" text="RBAC" >}}을 사용하는 경우, 메트릭을 읽으려면 `/metrics` 에 접근을 허용하는 클러스터롤(ClusterRole)을 가지는 사용자, 그룹 또는 서비스어카운트(ServiceAccount)를 통한 권한이 필요하다.
예를 들면, 다음과 같다.
-```
+```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
@@ -156,5 +152,4 @@ kube-scheduler는 각 파드에 대해 구성된 리소스 [요청과 제한](/k
## {{% heading "whatsnext" %}}
* 메트릭에 대한 [프로메테우스 텍스트 형식](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format)에 대해 읽어본다
-* [안정적인 쿠버네티스 메트릭](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml) 목록을 참고한다
* [쿠버네티스 사용 중단 정책](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)에 대해 읽어본다
diff --git a/content/ko/docs/concepts/configuration/secret.md b/content/ko/docs/concepts/configuration/secret.md
index e5466b8dec2ce..073eb2ff4afd8 100644
--- a/content/ko/docs/concepts/configuration/secret.md
+++ b/content/ko/docs/concepts/configuration/secret.md
@@ -22,6 +22,16 @@ weight: 30
명세나 이미지에 포함될 수 있다. 사용자는 시크릿을 만들 수 있고 시스템도
일부 시크릿을 만들 수 있다.
+{{< caution >}}
+쿠버네티스 시크릿은 기본적으로 암호화되지 않은 base64 인코딩 문자열로 저장된다.
+기본적으로 API 액세스 권한이 있는 모든 사용자 또는 쿠버네티스의 기본 데이터 저장소 etcd에
+액세스할 수 있는 모든 사용자가 일반 텍스트로 검색할 수 있다.
+시크릿을 안전하게 사용하려면 (최소한) 다음과 같이 하는 것이 좋다.
+
+1. 시크릿에 대한 [암호화 활성화](/docs/tasks/administer-cluster/encrypt-data/).
+2. 시크릿 읽기 및 쓰기를 제한하는 [RBAC 규칙 활성화 또는 구성](/docs/reference/access-authn-authz/authorization/). 파드를 만들 권한이 있는 모든 사용자는 시크릿을 암묵적으로 얻을 수 있다.
+{{< /caution >}}
+
## 시크릿 개요
@@ -269,6 +279,13 @@ SSH 인증 시크릿 타입은 사용자 편의만을 위해서 제공된다.
API 서버는 요구되는 키가 시크릿 구성에서 제공되고 있는지
검증도 한다.
+{{< caution >}}
+SSH 개인 키는 자체적으로 SSH 클라이언트와 호스트 서버 간에 신뢰할 수 있는 통신을
+설정하지 않는다. "중간자(man in the middle)" 공격을 완화하려면
+ConfigMap에 추가된 `known_hosts` 파일과 같은 신뢰를 설정하는
+2차 수단이 필요하다.
+{{< /caution >}}
+
### TLS 시크릿
쿠버네티스는 보통 TLS를 위해 사용되는 인증서와 관련된 키를 저장하기 위해서
@@ -786,7 +803,6 @@ immutable: true
수동으로 생성된 시크릿(예: GitHub 계정에 접근하기 위한 토큰이 포함된 시크릿)은
시크릿의 서비스 어카운트를 기반한 파드에 자동으로 연결될 수 있다.
-해당 프로세스에 대한 자세한 설명은 [파드프리셋(PodPreset)을 사용하여 파드에 정보 주입하기](/docs/tasks/inject-data-application/podpreset/)를 참고한다.
## 상세 내용
@@ -1233,3 +1249,4 @@ API 서버에서 kubelet으로의 통신은 SSL/TLS로 보호된다.
- [`kubectl` 을 사용한 시크릿 관리](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)하는 방법 배우기
- [구성 파일을 사용한 시크릿 관리](/docs/tasks/configmap-secret/managing-secret-using-config-file/)하는 방법 배우기
- [kustomize를 사용한 시크릿 관리](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)하는 방법 배우기
+
diff --git a/content/ko/docs/concepts/containers/container-lifecycle-hooks.md b/content/ko/docs/concepts/containers/container-lifecycle-hooks.md
index 662ac71522d05..d4bdc174039a5 100644
--- a/content/ko/docs/concepts/containers/container-lifecycle-hooks.md
+++ b/content/ko/docs/concepts/containers/container-lifecycle-hooks.md
@@ -54,7 +54,7 @@ Angular와 같이, 컴포넌트 라이프사이클 훅을 가진 많은 프로
컨테이너 라이프사이클 관리 훅이 호출되면,
쿠버네티스 관리 시스템은 훅 동작에 따라 핸들러를 실행하고,
-`exec` 와 `tcpSocket` 은 컨테이너에서 실행되고, `httpGet` 은 kubelet 프로세스에 의해 실행된다.
+`httpGet` 와 `tcpSocket` 은 kubelet 프로세스에 의해 실행되고, `exec` 은 컨테이너에서 실행된다.
훅 핸들러 호출은 해당 컨테이너를 포함하고 있는 파드의 컨텍스트와 동기적으로 동작한다.
이것은 `PostStart` 훅에 대해서,
diff --git a/content/ko/docs/concepts/containers/runtime-class.md b/content/ko/docs/concepts/containers/runtime-class.md
index b31cabe88862f..3d7c89b65c370 100644
--- a/content/ko/docs/concepts/containers/runtime-class.md
+++ b/content/ko/docs/concepts/containers/runtime-class.md
@@ -1,7 +1,4 @@
---
-
-
-
title: 런타임클래스(RuntimeClass)
content_type: concept
weight: 20
@@ -35,10 +32,6 @@ weight: 20
## 셋업
-런타임클래스 기능 게이트가 활성화(기본값)된 것을 확인한다.
-기능 게이트 활성화에 대한 설명은 [기능 게이트](/ko/docs/reference/command-line-tools-reference/feature-gates/)를
-참고한다. `RuntimeClass` 기능 게이트는 API 서버 _및_ kubelets에서 활성화되어야 한다.
-
1. CRI 구현(implementation)을 노드에 설정(런타임에 따라서).
2. 상응하는 런타임클래스 리소스 생성.
@@ -144,11 +137,9 @@ https://github.com/containerd/cri/blob/master/docs/config.md
{{< feature-state for_k8s_version="v1.16" state="beta" >}}
-쿠버네티스 v1.16 부터, 런타임 클래스는 `scheduling` 필드를 통해 이종의 클러스터
-지원을 포함한다. 이 필드를 사용하면, 이 런타임 클래스를 갖는 파드가 이를 지원하는
-노드로 스케줄된다는 것을 보장할 수 있다. 이 스케줄링 기능을 사용하려면,
-[런타임 클래스 어드미션(admission) 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#runtimeclass)를
-활성화(1.16 부터 기본값)해야 한다.
+RuntimeClass에 `scheduling` 필드를 지정하면, 이 RuntimeClass로 실행되는 파드가
+이를 지원하는 노드로 예약되도록 제약 조건을 설정할 수 있다.
+`scheduling`이 설정되지 않은 경우 이 RuntimeClass는 모든 노드에서 지원되는 것으로 간주된다.
파드가 지정된 런타임클래스를 지원하는 노드에 안착한다는 것을 보장하려면,
해당 노드들은 `runtimeClass.scheduling.nodeSelector` 필드에서 선택되는 공통 레이블을 가져야한다.
diff --git a/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md b/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md
index d1eecd6fdce45..ee9763a769a42 100644
--- a/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md
+++ b/content/ko/docs/concepts/extend-kubernetes/extend-cluster.md
@@ -69,7 +69,7 @@ weight: 10
웹훅 모델에서 쿠버네티스는 원격 서비스에 네트워크 요청을 한다.
*바이너리 플러그인* 모델에서 쿠버네티스는 바이너리(프로그램)를 실행한다.
바이너리 플러그인은 kubelet(예:
-[Flex Volume 플러그인](/ko/docs/concepts/storage/volumes/#flexvolume)과
+[Flex 볼륨 플러그인](/ko/docs/concepts/storage/volumes/#flexvolume)과
[네트워크 플러그인](/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/))과
kubectl에서
사용한다.
@@ -157,7 +157,7 @@ API를 추가해도 기존 API(예: 파드)의 동작에 직접 영향을 미치
### 스토리지 플러그인
-[Flex Volumes](/ko/docs/concepts/storage/volumes/#flexvolume)을 사용하면
+[Flex 볼륨](/ko/docs/concepts/storage/volumes/#flexvolume)을 사용하면
Kubelet이 바이너리 플러그인을 호출하여 볼륨을 마운트하도록 함으로써
빌트인 지원 없이 볼륨 유형을 마운트 할 수 있다.
diff --git a/content/ko/docs/concepts/services-networking/ingress-controllers.md b/content/ko/docs/concepts/services-networking/ingress-controllers.md
index 3af939488e267..06340143a0d2b 100644
--- a/content/ko/docs/concepts/services-networking/ingress-controllers.md
+++ b/content/ko/docs/concepts/services-networking/ingress-controllers.md
@@ -9,11 +9,11 @@ weight: 40
인그레스 리소스가 작동하려면, 클러스터는 실행 중인 인그레스 컨트롤러가 반드시 필요하다.
-kube-controller-manager 바이너리의 일부로 실행되는 컨트롤러의 다른 타입과 달리 인그레스 컨트롤러는
+`kube-controller-manager` 바이너리의 일부로 실행되는 컨트롤러의 다른 타입과 달리 인그레스 컨트롤러는
클러스터와 함께 자동으로 실행되지 않는다.
클러스터에 가장 적합한 인그레스 컨트롤러 구현을 선택하는데 이 페이지를 사용한다.
-프로젝트로써 쿠버네티스는 [AWS](https://github.com/kubernetes-sigs/aws-load-balancer-controller#readme), [GCE](https://git.k8s.io/ingress-gce/README.md#readme)와
+프로젝트로서 쿠버네티스는 [AWS](https://github.com/kubernetes-sigs/aws-load-balancer-controller#readme), [GCE](https://git.k8s.io/ingress-gce/README.md#readme)와
[nginx](https://git.k8s.io/ingress-nginx/README.md#readme) 인그레스 컨트롤러를 지원하고 유지한다.
@@ -26,6 +26,7 @@ kube-controller-manager 바이너리의 일부로 실행되는 컨트롤러의
* [AKS 애플리케이션 게이트웨이 인그레스 컨트롤러](https://azure.github.io/application-gateway-kubernetes-ingress/)는 [Azure 애플리케이션 게이트웨이](https://docs.microsoft.com)를 구성하는 인그레스 컨트롤러다.
* [Ambassador](https://www.getambassador.io/) API 게이트웨이는 [Envoy](https://www.envoyproxy.io) 기반 인그레스
컨트롤러다.
+* [Apache APISIX 인그레스 컨트롤러](https://github.com/apache/apisix-ingress-controller)는 [Apache APISIX](https://github.com/apache/apisix) 기반의 인그레스 컨트롤러이다.
* [Avi 쿠버네티스 오퍼레이터](https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes)는 [VMware NSX Advanced Load Balancer](https://avinetworks.com/)을 사용하는 L4-L7 로드 밸런싱을 제공한다.
* [Citrix 인그레스 컨트롤러](https://github.com/citrix/citrix-k8s-ingress-controller#readme)는
Citrix 애플리케이션 딜리버리 컨트롤러에서 작동한다.
@@ -42,7 +43,7 @@ kube-controller-manager 바이너리의 일부로 실행되는 컨트롤러의
기반 인그레스 컨트롤러다.
* [쿠버네티스 용 Kong 인그레스 컨트롤러](https://github.com/Kong/kubernetes-ingress-controller#readme)는 [Kong 게이트웨이](https://konghq.com/kong/)를
구동하는 인그레스 컨트롤러다.
-* [쿠버네티스 용 NGINX 인그레스 컨트롤러](https://www.nginx.com/products/nginx/kubernetes-ingress-controller)는 [NGINX](https://www.nginx.com/resources/glossary)
+* [쿠버네티스 용 NGINX 인그레스 컨트롤러](https://www.nginx.com/products/nginx-ingress-controller/)는 [NGINX](https://www.nginx.com/resources/glossary/nginx/)
웹서버(프록시로 사용)와 함께 작동한다.
* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/)는 사용자의 커스텀 프록시를 구축하기 위한 라이브러리로 설계된 쿠버네티스 인그레스와 같은 유스케이스를 포함한 서비스 구성을 위한 HTTP 라우터 및 역방향 프록시다.
* [Traefik 쿠버네티스 인그레스 제공자](https://doc.traefik.io/traefik/providers/kubernetes-ingress/)는
diff --git a/content/ko/docs/concepts/services-networking/ingress.md b/content/ko/docs/concepts/services-networking/ingress.md
index 5b91356437030..a55302f059e45 100644
--- a/content/ko/docs/concepts/services-networking/ingress.md
+++ b/content/ko/docs/concepts/services-networking/ingress.md
@@ -376,7 +376,7 @@ graph LR;
트래픽을 일치 시킬 수 있다.
예를 들어, 다음 인그레스는 `first.bar.com`에 요청된 트래픽을
-`service1`로, `second.foo.com`는 `service2`로, 호스트 이름이 정의되지
+`service1`로, `second.bar.com`는 `service2`로, 호스트 이름이 정의되지
않은(즉, 요청 헤더가 표시 되지 않는) IP 주소로의 모든
트래픽은 `service3`로 라우팅 한다.
diff --git a/content/ko/docs/concepts/services-networking/service.md b/content/ko/docs/concepts/services-networking/service.md
index da9d353d6f86e..5d1acf5392397 100644
--- a/content/ko/docs/concepts/services-networking/service.md
+++ b/content/ko/docs/concepts/services-networking/service.md
@@ -134,7 +134,7 @@ spec:
* 한 서비스에서 다른
{{< glossary_tooltip term_id="namespace" text="네임스페이스">}} 또는 다른 클러스터의 서비스를 지정하려고 한다.
* 워크로드를 쿠버네티스로 마이그레이션하고 있다. 해당 방식을 평가하는 동안,
- 쿠버네티스에서는 일정 비율의 백엔드만 실행한다.
+ 쿠버네티스에서는 백엔드의 일부만 실행한다.
이러한 시나리오 중에서 파드 셀렉터 _없이_ 서비스를 정의 할 수 있다.
예를 들면
diff --git a/content/ko/docs/concepts/workloads/controllers/job.md b/content/ko/docs/concepts/workloads/controllers/job.md
index 0f04051ff1f09..64b5d3879d6cc 100644
--- a/content/ko/docs/concepts/workloads/controllers/job.md
+++ b/content/ko/docs/concepts/workloads/controllers/job.md
@@ -13,7 +13,7 @@ weight: 50
-잡에서 하나 이상의 파드를 생성하고 지정된 수의 파드가 성공적으로 종료되도록 한다.
+잡에서 하나 이상의 파드를 생성하고 지정된 수의 파드가 성공적으로 종료될 때까지 계속해서 파드의 실행을 재시도한다.
파드가 성공적으로 완료되면, 성공적으로 완료된 잡을 추적한다. 지정된 수의
성공 완료에 도달하면, 작업(즉, 잡)이 완료된다. 잡을 삭제하면 잡이 생성한
파드가 정리된다.
diff --git a/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md b/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md
index a9412662309d4..5ed869fb576cc 100644
--- a/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md
+++ b/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md
@@ -76,4 +76,4 @@ TTL 컨트롤러는 쿠버네티스 리소스에
* [자동으로 잡 정리](/ko/docs/concepts/workloads/controllers/job/#완료된-잡을-자동으로-정리)
-* [디자인 문서](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md)
+* [디자인 문서](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/README.md)
diff --git a/content/ko/docs/reference/_index.md b/content/ko/docs/reference/_index.md
index 14fee6ee9caa2..c294f2efb591a 100644
--- a/content/ko/docs/reference/_index.md
+++ b/content/ko/docs/reference/_index.md
@@ -18,7 +18,8 @@ content_type: concept
## API 레퍼런스
-* [쿠버네티스 API 레퍼런스 {{< param "version" >}}](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
+* [쿠버네티스 API 레퍼런스](/docs/reference/kubernetes-api/)
+* [쿠버네티스 {{< param "version" >}}용 원페이지(One-page) API 레퍼런스](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
* [쿠버네티스 API 사용](/ko/docs/reference/using-api/) - 쿠버네티스 API에 대한 개요
## API 클라이언트 라이브러리
diff --git a/content/ko/docs/reference/command-line-tools-reference/feature-gates.md b/content/ko/docs/reference/command-line-tools-reference/feature-gates.md
index 6fa2c58a56f9c..715dbf480103e 100644
--- a/content/ko/docs/reference/command-line-tools-reference/feature-gates.md
+++ b/content/ko/docs/reference/command-line-tools-reference/feature-gates.md
@@ -48,13 +48,15 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
| 기능 | 디폴트 | 단계 | 도입 | 종료 |
|---------|---------|-------|-------|-------|
-| `AnyVolumeDataSource` | `false` | 알파 | 1.18 | |
| `APIListChunking` | `false` | 알파 | 1.8 | 1.8 |
| `APIListChunking` | `true` | 베타 | 1.9 | |
| `APIPriorityAndFairness` | `false` | 알파 | 1.17 | 1.19 |
| `APIPriorityAndFairness` | `true` | 베타 | 1.20 | |
-| `APIResponseCompression` | `false` | 알파 | 1.7 | |
+| `APIResponseCompression` | `false` | 알파 | 1.7 | 1.15 |
+| `APIResponseCompression` | `false` | 베타 | 1.16 | |
| `APIServerIdentity` | `false` | 알파 | 1.20 | |
+| `AllowInsecureBackendProxy` | `true` | 베타 | 1.17 | |
+| `AnyVolumeDataSource` | `false` | 알파 | 1.18 | |
| `AppArmor` | `true` | 베타 | 1.4 | |
| `BalanceAttachedNodeVolumes` | `false` | 알파 | 1.11 | |
| `BoundServiceAccountTokenVolume` | `false` | 알파 | 1.13 | |
@@ -77,7 +79,8 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
| `CSIMigrationGCE` | `false` | 알파 | 1.14 | 1.16 |
| `CSIMigrationGCE` | `false` | 베타 | 1.17 | |
| `CSIMigrationGCEComplete` | `false` | 알파 | 1.17 | |
-| `CSIMigrationOpenStack` | `false` | 알파 | 1.14 | |
+| `CSIMigrationOpenStack` | `false` | 알파 | 1.14 | 1.17 |
+| `CSIMigrationOpenStack` | `true` | 베타 | 1.18 | |
| `CSIMigrationOpenStackComplete` | `false` | 알파 | 1.17 | |
| `CSIMigrationvSphere` | `false` | 베타 | 1.19 | |
| `CSIMigrationvSphereComplete` | `false` | 베타 | 1.19 | |
@@ -89,26 +92,23 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
| `ConfigurableFSGroupPolicy` | `true` | 베타 | 1.20 | |
| `CronJobControllerV2` | `false` | 알파 | 1.20 | |
| `CustomCPUCFSQuotaPeriod` | `false` | 알파 | 1.12 | |
-| `CustomResourceDefaulting` | `false` | 알파| 1.15 | 1.15 |
-| `CustomResourceDefaulting` | `true` | 베타 | 1.16 | |
| `DefaultPodTopologySpread` | `false` | 알파 | 1.19 | 1.19 |
| `DefaultPodTopologySpread` | `true` | 베타 | 1.20 | |
| `DevicePlugins` | `false` | 알파 | 1.8 | 1.9 |
| `DevicePlugins` | `true` | 베타 | 1.10 | |
| `DisableAcceleratorUsageMetrics` | `false` | 알파 | 1.19 | 1.19 |
-| `DisableAcceleratorUsageMetrics` | `true` | 베타 | 1.20 | 1.22 |
+| `DisableAcceleratorUsageMetrics` | `true` | 베타 | 1.20 | |
| `DownwardAPIHugePages` | `false` | 알파 | 1.20 | |
-| `DryRun` | `false` | 알파 | 1.12 | 1.12 |
-| `DryRun` | `true` | 베타 | 1.13 | |
| `DynamicKubeletConfig` | `false` | 알파 | 1.4 | 1.10 |
| `DynamicKubeletConfig` | `true` | 베타 | 1.11 | |
+| `EfficientWatchResumption` | `false` | 알파 | 1.20 | |
| `EndpointSlice` | `false` | 알파 | 1.16 | 1.16 |
| `EndpointSlice` | `false` | 베타 | 1.17 | |
| `EndpointSlice` | `true` | 베타 | 1.18 | |
| `EndpointSliceNodeName` | `false` | 알파 | 1.20 | |
| `EndpointSliceProxying` | `false` | 알파 | 1.18 | 1.18 |
| `EndpointSliceProxying` | `true` | 베타 | 1.19 | |
-| `EndpointSliceTerminating` | `false` | 알파 | 1.20 | |
+| `EndpointSliceTerminatingCondition` | `false` | 알파 | 1.20 | |
| `EphemeralContainers` | `false` | 알파 | 1.16 | |
| `ExpandCSIVolumes` | `false` | 알파 | 1.14 | 1.15 |
| `ExpandCSIVolumes` | `true` | 베타 | 1.16 | |
@@ -119,19 +119,22 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
| `ExperimentalHostUserNamespaceDefaulting` | `false` | 베타 | 1.5 | |
| `GenericEphemeralVolume` | `false` | 알파 | 1.19 | |
| `GracefulNodeShutdown` | `false` | 알파 | 1.20 | |
+| `HPAContainerMetrics` | `false` | 알파 | 1.20 | |
| `HPAScaleToZero` | `false` | 알파 | 1.16 | |
| `HugePageStorageMediumSize` | `false` | 알파 | 1.18 | 1.18 |
| `HugePageStorageMediumSize` | `true` | 베타 | 1.19 | |
-| `HyperVContainer` | `false` | 알파 | 1.10 | |
+| `IPv6DualStack` | `false` | 알파 | 1.15 | |
| `ImmutableEphemeralVolumes` | `false` | 알파 | 1.18 | 1.18 |
| `ImmutableEphemeralVolumes` | `true` | 베타 | 1.19 | |
-| `IPv6DualStack` | `false` | 알파 | 1.16 | |
-| `LegacyNodeRoleBehavior` | `true` | 알파 | 1.16 | |
+| `KubeletCredentialProviders` | `false` | 알파 | 1.20 | |
+| `KubeletPodResources` | `true` | 알파 | 1.13 | 1.14 |
+| `KubeletPodResources` | `true` | 베타 | 1.15 | |
+| `LegacyNodeRoleBehavior` | `false` | 알파 | 1.16 | 1.18 |
+| `LegacyNodeRoleBehavior` | `true` | 베타 | 1.19 | |
| `LocalStorageCapacityIsolation` | `false` | 알파 | 1.7 | 1.9 |
| `LocalStorageCapacityIsolation` | `true` | 베타 | 1.10 | |
| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | 알파 | 1.15 | |
| `MixedProtocolLBService` | `false` | 알파 | 1.20 | |
-| `MountContainers` | `false` | 알파 | 1.9 | |
| `NodeDisruptionExclusion` | `false` | 알파 | 1.16 | 1.18 |
| `NodeDisruptionExclusion` | `true` | 베타 | 1.19 | |
| `NonPreemptingPriority` | `false` | 알파 | 1.15 | 1.18 |
@@ -143,25 +146,27 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
| `ProcMountType` | `false` | 알파 | 1.12 | |
| `QOSReserved` | `false` | 알파 | 1.11 | |
| `RemainingItemCount` | `false` | 알파 | 1.15 | |
+| `RemoveSelfLink` | `false` | 알파 | 1.16 | 1.19 |
+| `RemoveSelfLink` | `true` | 베타 | 1.20 | |
| `RootCAConfigMap` | `false` | 알파 | 1.13 | 1.19 |
| `RootCAConfigMap` | `true` | 베타 | 1.20 | |
| `RotateKubeletServerCertificate` | `false` | 알파 | 1.7 | 1.11 |
| `RotateKubeletServerCertificate` | `true` | 베타 | 1.12 | |
| `RunAsGroup` | `true` | 베타 | 1.14 | |
-| `RuntimeClass` | `false` | 알파 | 1.12 | 1.13 |
-| `RuntimeClass` | `true` | 베타 | 1.14 | |
| `SCTPSupport` | `false` | 알파 | 1.12 | 1.18 |
| `SCTPSupport` | `true` | 베타 | 1.19 | |
| `ServerSideApply` | `false` | 알파 | 1.14 | 1.15 |
| `ServerSideApply` | `true` | 베타 | 1.16 | |
-| `ServiceAccountIssuerDiscovery` | `false` | 알파 | 1.18 | |
-| `ServiceLBNodePortControl` | `false` | 알파 | 1.20 | 1.20 |
+| `ServiceAccountIssuerDiscovery` | `false` | 알파 | 1.18 | 1.19 |
+| `ServiceAccountIssuerDiscovery` | `true` | 베타 | 1.20 | |
+| `ServiceLBNodePortControl` | `false` | 알파 | 1.20 | |
| `ServiceNodeExclusion` | `false` | 알파 | 1.8 | 1.18 |
| `ServiceNodeExclusion` | `true` | 베타 | 1.19 | |
| `ServiceTopology` | `false` | 알파 | 1.17 | |
-| `SizeMemoryBackedVolumes` | `false` | 알파 | 1.20 | |
| `SetHostnameAsFQDN` | `false` | 알파 | 1.19 | 1.19 |
| `SetHostnameAsFQDN` | `true` | 베타 | 1.20 | |
+| `SizeMemoryBackedVolumes` | `false` | 알파 | 1.20 | |
+| `StorageVersionAPI` | `false` | 알파 | 1.20 | |
| `StorageVersionHash` | `false` | 알파 | 1.14 | 1.14 |
| `StorageVersionHash` | `true` | 베타 | 1.15 | |
| `Sysctls` | `true` | 베타 | 1.11 | |
@@ -170,11 +175,11 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
| `TopologyManager` | `true` | 베타 | 1.18 | |
| `ValidateProxyRedirects` | `false` | 알파 | 1.12 | 1.13 |
| `ValidateProxyRedirects` | `true` | 베타 | 1.14 | |
-| `WindowsEndpointSliceProxying` | `false` | 알파 | 1.19 | |
-| `WindowsGMSA` | `false` | 알파 | 1.14 | |
-| `WindowsGMSA` | `true` | 베타 | 1.16 | |
+| `WarningHeaders` | `true` | 베타 | 1.19 | |
| `WinDSR` | `false` | 알파 | 1.14 | |
-| `WinOverlay` | `false` | 알파 | 1.14 | |
+| `WinOverlay` | `false` | 알파 | 1.14 | 1.19 |
+| `WinOverlay` | `true` | 베타 | 1.20 | |
+| `WindowsEndpointSliceProxying` | `false` | 알파 | 1.19 | |
{{< /table >}}
### GA 또는 사용 중단된 기능을 위한 기능 게이트
@@ -228,6 +233,9 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
| `CustomResourceWebhookConversion` | `false` | 알파 | 1.13 | 1.14 |
| `CustomResourceWebhookConversion` | `true` | 베타 | 1.15 | 1.15 |
| `CustomResourceWebhookConversion` | `true` | GA | 1.16 | - |
+| `DryRun` | `false` | 알파 | 1.12 | 1.12 |
+| `DryRun` | `true` | 베타 | 1.13 | 1.18 |
+| `DryRun` | `true` | GA | 1.19 | - |
| `DynamicAuditing` | `false` | 알파 | 1.13 | 1.18 |
| `DynamicAuditing` | - | 사용중단 | 1.19 | - |
| `DynamicProvisioningScheduling` | `false` | 알파 | 1.11 | 1.11 |
@@ -247,23 +255,28 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
| `HugePages` | `false` | 알파 | 1.8 | 1.9 |
| `HugePages` | `true` | 베타| 1.10 | 1.13 |
| `HugePages` | `true` | GA | 1.14 | - |
+| `HyperVContainer` | `false` | 알파 | 1.10 | 1.19 |
+| `HyperVContainer` | `false` | 사용중단 | 1.20 | - |
| `Initializers` | `false` | 알파 | 1.7 | 1.13 |
| `Initializers` | - | 사용중단 | 1.14 | - |
| `KubeletConfigFile` | `false` | 알파 | 1.8 | 1.9 |
| `KubeletConfigFile` | - | 사용중단 | 1.10 | - |
-| `KubeletCredentialProviders` | `false` | 알파 | 1.20 | 1.20 |
| `KubeletPluginsWatcher` | `false` | 알파 | 1.11 | 1.11 |
| `KubeletPluginsWatcher` | `true` | 베타 | 1.12 | 1.12 |
| `KubeletPluginsWatcher` | `true` | GA | 1.13 | - |
| `KubeletPodResources` | `false` | 알파 | 1.13 | 1.14 |
| `KubeletPodResources` | `true` | 베타 | 1.15 | |
| `KubeletPodResources` | `true` | GA | 1.20 | |
+| `MountContainers` | `false` | 알파 | 1.9 | 1.16 |
+| `MountContainers` | `false` | 사용중단 | 1.17 | - |
| `MountPropagation` | `false` | 알파 | 1.8 | 1.9 |
| `MountPropagation` | `true` | 베타 | 1.10 | 1.11 |
| `MountPropagation` | `true` | GA | 1.12 | - |
| `NodeLease` | `false` | 알파 | 1.12 | 1.13 |
| `NodeLease` | `true` | 베타 | 1.14 | 1.16 |
| `NodeLease` | `true` | GA | 1.17 | - |
+| `PVCProtection` | `false` | 알파 | 1.9 | 1.9 |
+| `PVCProtection` | - | 사용중단 | 1.10 | - |
| `PersistentLocalVolumes` | `false` | 알파 | 1.7 | 1.9 |
| `PersistentLocalVolumes` | `true` | 베타 | 1.10 | 1.13 |
| `PersistentLocalVolumes` | `true` | GA | 1.14 | - |
@@ -276,8 +289,6 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
| `PodShareProcessNamespace` | `false` | 알파 | 1.10 | 1.11 |
| `PodShareProcessNamespace` | `true` | 베타 | 1.12 | 1.16 |
| `PodShareProcessNamespace` | `true` | GA | 1.17 | - |
-| `PVCProtection` | `false` | 알파 | 1.9 | 1.9 |
-| `PVCProtection` | - | 사용중단 | 1.10 | - |
| `RequestManagement` | `false` | 알파 | 1.15 | 1.16 |
| `ResourceLimitsPriorityFunction` | `false` | 알파 | 1.9 | 1.18 |
| `ResourceLimitsPriorityFunction` | - | 사용중단 | 1.19 | - |
@@ -398,62 +409,131 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
각 기능 게이트는 특정 기능을 활성화/비활성화하도록 설계되었다.
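+
+예를 들어, 다음과 같이 컴포넌트의 기동 플래그로 기능 게이트를 켜거나 끌 수 있다(여기서 고른 게이트 조합은 설명을 위한 가정이며, 실제 기동에는 다른 필수 플래그가 함께 필요하다).
+
+```shell
+# kube-apiserver에서 기능 게이트를 설정하는 플래그 형식의 예시
+kube-apiserver --feature-gates=APIPriorityAndFairness=true,RemoveSelfLink=false
+
+# kubelet도 동일한 플래그 형식을 사용한다
+kubelet --feature-gates=GracefulNodeShutdown=true
+```
+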
+- `APIListChunking`: API 클라이언트가 API 서버에서 (`LIST` 또는 `GET`)
+ 리소스를 청크(chunks)로 검색할 수 있도록 한다.
+- `APIPriorityAndFairness`: 각 서버의 우선 순위와 공정성을 통해 동시 요청을
+ 관리할 수 있다. (`RequestManagement` 에서 이름이 변경됨)
+- `APIResponseCompression`: `LIST` 또는 `GET` 요청에 대한 API 응답을 압축한다.
+- `APIServerIdentity`: 클러스터의 각 API 서버에 ID를 할당한다.
- `Accelerators`: 도커 사용 시 Nvidia GPU 지원 활성화한다.
- `AdvancedAuditing`: [고급 감사](/docs/tasks/debug-application-cluster/audit/#advanced-audit) 기능을 활성화한다.
-- `AffinityInAnnotations`(*사용 중단됨*): [파드 어피니티 또는 안티-어피니티](/ko/docs/concepts/scheduling-eviction/assign-pod-node/#어피니티-affinity-와-안티-어피니티-anti-affinity) 설정을 활성화한다.
+- `AffinityInAnnotations`(*사용 중단됨*): [파드 어피니티 또는 안티-어피니티](/ko/docs/concepts/scheduling-eviction/assign-pod-node/#어피니티-affinity-와-안티-어피니티-anti-affinity)
+ 설정을 활성화한다.
- `AllowExtTrafficLocalEndpoints`: 서비스가 외부 요청을 노드의 로컬 엔드포인트로 라우팅할 수 있도록 한다.
+- `AllowInsecureBackendProxy`: 사용자가 파드 로그 요청에서 kubelet의
+ TLS 확인을 건너뛸 수 있도록 한다.
- `AnyVolumeDataSource`: {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}의
`DataSource` 로 모든 사용자 정의 리소스 사용을 활성화한다.
-- `APIListChunking`: API 클라이언트가 API 서버에서 (`LIST` 또는 `GET`) 리소스를 청크(chunks)로 검색할 수 있도록 한다.
-- `APIPriorityAndFairness`: 각 서버의 우선 순위와 공정성을 통해 동시 요청을 관리할 수 있다. (`RequestManagement` 에서 이름이 변경됨)
-- `APIResponseCompression`: `LIST` 또는 `GET` 요청에 대한 API 응답을 압축한다.
-- `APIServerIdentity`: 클러스터의 각 kube-apiserver에 ID를 할당한다.
- `AppArmor`: 도커를 사용할 때 리눅스 노드에서 AppArmor 기반의 필수 접근 제어를 활성화한다.
- 자세한 내용은 [AppArmor 튜토리얼](/ko/docs/tutorials/clusters/apparmor/)을 참고한다.
+ 자세한 내용은 [AppArmor 튜토리얼](/ko/docs/tutorials/clusters/apparmor/)을 참고한다.
- `AttachVolumeLimit`: 볼륨 플러그인이 노드에 연결될 수 있는 볼륨 수에
대한 제한을 보고하도록 한다.
- 자세한 내용은 [동적 볼륨 제한](/ko/docs/concepts/storage/storage-limits/#동적-볼륨-한도)을 참고한다.
+ 자세한 내용은 [동적 볼륨 제한](/ko/docs/concepts/storage/storage-limits/#동적-볼륨-한도)을 참고한다.
- `BalanceAttachedNodeVolumes`: 스케줄링 시 균형 잡힌 리소스 할당을 위해 고려할 노드의 볼륨 수를
포함한다. 스케줄러가 결정을 내리는 동안 CPU, 메모리 사용률 및 볼륨 수가
더 가까운 노드가 선호된다.
- `BlockVolume`: 파드에서 원시 블록 장치의 정의와 사용을 활성화한다.
- 자세한 내용은 [원시 블록 볼륨 지원](/ko/docs/concepts/storage/persistent-volumes/#원시-블록-볼륨-지원)을
- 참고한다.
+ 자세한 내용은 [원시 블록 볼륨 지원](/ko/docs/concepts/storage/persistent-volumes/#원시-블록-볼륨-지원)을
+ 참고한다.
- `BoundServiceAccountTokenVolume`: ServiceAccountTokenVolumeProjection으로 구성된 프로젝션 볼륨을 사용하도록 서비스어카운트 볼륨을
- 마이그레이션한다. 클러스터 관리자는 `serviceaccount_stale_tokens_total` 메트릭을 사용하여
- 확장 토큰에 의존하는 워크로드를 모니터링 할 수 있다. 이러한 워크로드가 없는 경우 `--service-account-extend-token-expiration=false` 플래그로
- `kube-apiserver`를 시작하여 확장 토큰 기능을 끈다.
- 자세한 내용은 [바운드 서비스 계정 토큰](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)을
- 확인한다.
-- `ConfigurableFSGroupPolicy`: 파드에 볼륨을 마운트할 때 fsGroups에 대한 볼륨 권한 변경 정책을 구성할 수 있다. 자세한 내용은 [파드에 대한 볼륨 권한 및 소유권 변경 정책 구성](/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods)을 참고한다.
--`CronJobControllerV2` : {{< glossary_tooltip text="크론잡" term_id="cronjob" >}} 컨트롤러의 대체 구현을 사용한다. 그렇지 않으면 동일한 컨트롤러의 버전 1이 선택된다. 버전 2 컨트롤러는 실험적인 성능 향상을 제공한다.
-- `CPUManager`: 컨테이너 수준의 CPU 어피니티 지원을 활성화한다. [CPU 관리 정책](/docs/tasks/administer-cluster/cpu-management-policies/)을 참고한다.
+ 마이그레이션한다. 클러스터 관리자는 `serviceaccount_stale_tokens_total` 메트릭을 사용하여
+ 확장 토큰에 의존하는 워크로드를 모니터링 할 수 있다. 이러한 워크로드가 없는 경우 `--service-account-extend-token-expiration=false` 플래그로
+ `kube-apiserver`를 시작하여 확장 토큰 기능을 끈다.
+ 자세한 내용은 [바운드 서비스 계정 토큰](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)을
+ 확인한다.
+- `CPUManager`: 컨테이너 수준의 CPU 어피니티 지원을 활성화한다.
+ [CPU 관리 정책](/docs/tasks/administer-cluster/cpu-management-policies/)을 참고한다.
- `CRIContainerLogRotation`: cri 컨테이너 런타임에 컨테이너 로그 로테이션을 활성화한다.
-- `CSIBlockVolume`: 외부 CSI 볼륨 드라이버가 블록 스토리지를 지원할 수 있게 한다. 자세한 내용은 [`csi` 원시 블록 볼륨 지원](/ko/docs/concepts/storage/volumes/#csi-원시-raw-블록-볼륨-지원) 문서를 참고한다.
-- `CSIDriverRegistry`: csi.storage.k8s.io에서 CSIDriver API 오브젝트와 관련된 모든 로직을 활성화한다.
+- `CSIBlockVolume`: 외부 CSI 볼륨 드라이버가 블록 스토리지를 지원할 수 있게 한다.
+ 자세한 내용은 [`csi` 원시 블록 볼륨 지원](/ko/docs/concepts/storage/volumes/#csi-원시-raw-블록-볼륨-지원)
+ 문서를 참고한다.
+- `CSIDriverRegistry`: csi.storage.k8s.io에서 CSIDriver API 오브젝트와 관련된
+ 모든 로직을 활성화한다.
- `CSIInlineVolume`: 파드에 대한 CSI 인라인 볼륨 지원을 활성화한다.
-- `CSIMigration`: shim 및 변환 로직을 통해 볼륨 작업을 인-트리 플러그인에서 사전 설치된 해당 CSI 플러그인으로 라우팅할 수 있다.
-- `CSIMigrationAWS`: shim 및 변환 로직을 통해 볼륨 작업을 AWS-EBS 인-트리 플러그인에서 EBS CSI 플러그인으로 라우팅할 수 있다. 노드에 EBS CSI 플러그인이 설치와 구성이 되어 있지 않은 경우 인-트리 EBS 플러그인으로 폴백(falling back)을 지원한다. CSIMigration 기능 플래그가 필요하다.
-- `CSIMigrationAWSComplete`: kubelet 및 볼륨 컨트롤러에서 EBS 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 사용하여 볼륨 작업을 AWS-EBS 인-트리 플러그인에서 EBS CSI 플러그인으로 라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAWS 기능 플래그가 활성화되고 EBS CSI 플러그인이 설치 및 구성이 되어 있어야 한다.
-- `CSIMigrationAzureDisk`: shim 및 변환 로직을 통해 볼륨 작업을 Azure-Disk 인-트리 플러그인에서 AzureDisk CSI 플러그인으로 라우팅할 수 있다. 노드에 AzureDisk CSI 플러그인이 설치와 구성이 되어 있지 않은 경우 인-트리 AzureDisk 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다.
-- `CSIMigrationAzureDiskComplete`: kubelet 및 볼륨 컨트롤러에서 Azure-Disk 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 사용하여 볼륨 작업을 Azure-Disk 인-트리 플러그인에서 AzureDisk CSI 플러그인으로 라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAzureDisk 기능 플래그가 활성화되고 AzureDisk CSI 플러그인이 설치 및 구성이 되어 있어야 한다.
-- `CSIMigrationAzureFile`: shim 및 변환 로직을 통해 볼륨 작업을 Azure-File 인-트리 플러그인에서 AzureFile CSI 플러그인으로 라우팅할 수 있다. 노드에 AzureFile CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 AzureFile 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다.
-- `CSIMigrationAzureFileComplete`: kubelet 및 볼륨 컨트롤러에서 Azure 파일 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 통해 볼륨 작업을 Azure 파일 인-트리 플러그인에서 AzureFile CSI 플러그인으로 라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAzureFile 기능 플래그가 활성화되고 AzureFile CSI 플러그인이 설치 및 구성이 되어 있어야 한다.
-- `CSIMigrationGCE`: shim 및 변환 로직을 통해 볼륨 작업을 GCE-PD 인-트리 플러그인에서 PD CSI 플러그인으로 라우팅할 수 있다. 노드에 PD CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 GCE 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다.
-- `CSIMigrationGCEComplete`: kubelet 및 볼륨 컨트롤러에서 GCE-PD 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 통해 볼륨 작업을 GCE-PD 인-트리 플러그인에서 PD CSI 플러그인으로 라우팅할 수 있다. CSIMigration과 CSIMigrationGCE 기능 플래그가 필요하다.
-- `CSIMigrationOpenStack`: shim 및 변환 로직을 통해 볼륨 작업을 Cinder 인-트리 플러그인에서 Cinder CSI 플러그인으로 라우팅할 수 있다. 노드에 Cinder CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 Cinder 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다.
-- `CSIMigrationOpenStackComplete`: kubelet 및 볼륨 컨트롤러에서 Cinder 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직이 Cinder 인-트리 플러그인에서 Cinder CSI 플러그인으로 볼륨 작업을 라우팅할 수 있도록 한다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationOpenStack 기능 플래그가 활성화되고 Cinder CSI 플러그인이 설치 및 구성이 되어 있어야 한다.
-- `CSIMigrationvSphere`: vSphere 인-트리 플러그인에서 vSphere CSI 플러그인으로 볼륨 작업을 라우팅하는 shim 및 변환 로직을 사용한다. 노드에 vSphere CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 vSphere 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다.
-- `CSIMigrationvSphereComplete`: kubelet 및 볼륨 컨트롤러에서 vSphere 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 활성화하여 vSphere 인-트리 플러그인에서 vSphere CSI 플러그인으로 볼륨 작업을 라우팅할 수 있도록 한다. CSIMigration 및 CSIMigrationvSphere 기능 플래그가 활성화되고 vSphere CSI 플러그인이 클러스터의 모든 노드에 설치 및 구성이 되어 있어야 한다.
+- `CSIMigration`: shim 및 변환 로직을 통해 볼륨 작업을 인-트리 플러그인에서
+ 사전 설치된 해당 CSI 플러그인으로 라우팅할 수 있다.
+- `CSIMigrationAWS`: shim 및 변환 로직을 통해 볼륨 작업을
+ AWS-EBS 인-트리 플러그인에서 EBS CSI 플러그인으로 라우팅할 수 있다. 노드에
+ EBS CSI 플러그인이 설치와 구성이 되어 있지 않은 경우 인-트리 EBS 플러그인으로
+ 폴백(falling back)을 지원한다. CSIMigration 기능 플래그가 필요하다.
+- `CSIMigrationAWSComplete`: kubelet 및 볼륨 컨트롤러에서 EBS 인-트리
+ 플러그인 등록을 중지하고 shim 및 변환 로직을 사용하여 볼륨 작업을 AWS-EBS
+ 인-트리 플러그인에서 EBS CSI 플러그인으로 라우팅할 수 있다.
+ 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAWS 기능 플래그가 활성화되고
+ EBS CSI 플러그인이 설치 및 구성이 되어 있어야 한다.
+- `CSIMigrationAzureDisk`: shim 및 변환 로직을 통해 볼륨 작업을
+ Azure-Disk 인-트리 플러그인에서 AzureDisk CSI 플러그인으로 라우팅할 수 있다.
+ 노드에 AzureDisk CSI 플러그인이 설치와 구성이 되어 있지 않은 경우 인-트리
+ AzureDisk 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가
+ 필요하다.
+- `CSIMigrationAzureDiskComplete`: kubelet 및 볼륨 컨트롤러에서 Azure-Disk 인-트리
+ 플러그인 등록을 중지하고 shim 및 변환 로직을 사용하여 볼륨 작업을
+ Azure-Disk 인-트리 플러그인에서 AzureDisk CSI 플러그인으로
+ 라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAzureDisk 기능
+ 플래그가 활성화되고 AzureDisk CSI 플러그인이 설치 및 구성이 되어
+ 있어야 한다.
+- `CSIMigrationAzureFile`: shim 및 변환 로직을 통해 볼륨 작업을
+ Azure-File 인-트리 플러그인에서 AzureFile CSI 플러그인으로 라우팅할 수 있다.
+ 노드에 AzureFile CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리
+ AzureFile 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가
+ 필요하다.
+- `CSIMigrationAzureFileComplete`: kubelet 및 볼륨 컨트롤러에서 Azure 파일 인-트리
+ 플러그인 등록을 중지하고 shim 및 변환 로직을 통해 볼륨 작업을
+ Azure 파일 인-트리 플러그인에서 AzureFile CSI 플러그인으로
+ 라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAzureFile 기능
+ 플래그가 활성화되고 AzureFile CSI 플러그인이 설치 및 구성이 되어
+ 있어야 한다.
+- `CSIMigrationGCE`: shim 및 변환 로직을 통해 볼륨 작업을
+ GCE-PD 인-트리 플러그인에서 PD CSI 플러그인으로 라우팅할 수 있다. 노드에
+ PD CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 GCE 플러그인으로 폴백을
+ 지원한다. CSIMigration 기능 플래그가 필요하다.
+- `CSIMigrationGCEComplete`: kubelet 및 볼륨 컨트롤러에서 GCE-PD
+ 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 통해 볼륨 작업을 GCE-PD
+ 인-트리 플러그인에서 PD CSI 플러그인으로 라우팅할 수 있다.
+ CSIMigration과 CSIMigrationGCE 기능 플래그가 활성화되고 PD CSI
+ 플러그인이 클러스터의 모든 노드에 설치 및 구성이 되어 있어야 한다.
+- `CSIMigrationOpenStack`: shim 및 변환 로직을 통해 볼륨 작업을
+ Cinder 인-트리 플러그인에서 Cinder CSI 플러그인으로 라우팅할 수 있다. 노드에
+ Cinder CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리
+ Cinder 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다.
+- `CSIMigrationOpenStackComplete`: kubelet 및 볼륨 컨트롤러에서
+ Cinder 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직이 Cinder 인-트리
+ 플러그인에서 Cinder CSI 플러그인으로 볼륨 작업을 라우팅할 수 있도록 한다.
+ 클러스터의 모든 노드에 CSIMigration과 CSIMigrationOpenStack 기능 플래그가 활성화되고
+ Cinder CSI 플러그인이 설치 및 구성이 되어 있어야 한다.
+- `CSIMigrationvSphere`: vSphere 인-트리 플러그인에서 vSphere CSI 플러그인으로 볼륨 작업을
+ 라우팅하는 shim 및 변환 로직을 사용한다.
+ 노드에 vSphere CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우
+ 인-트리 vSphere 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다.
+- `CSIMigrationvSphereComplete`: kubelet 및 볼륨 컨트롤러에서 vSphere 인-트리
+ 플러그인 등록을 중지하고 shim 및 변환 로직을 활성화하여 vSphere 인-트리 플러그인에서
+ vSphere CSI 플러그인으로 볼륨 작업을 라우팅할 수 있도록 한다. CSIMigration 및
+ CSIMigrationvSphere 기능 플래그가 활성화되고 vSphere CSI 플러그인이
+ 클러스터의 모든 노드에 설치 및 구성이 되어 있어야 한다.
- `CSINodeInfo`: csi.storage.k8s.io에서 CSINodeInfo API 오브젝트와 관련된 모든 로직을 활성화한다.
- `CSIPersistentVolume`: [CSI (Container Storage Interface)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md)
호환 볼륨 플러그인을 통해 프로비저닝된 볼륨을 감지하고
마운트할 수 있다.
-- `CSIServiceAccountToken` : 볼륨을 마운트하는 파드의 서비스 계정 토큰을 받을 수 있도록 CSI 드라이버를 활성화한다. [토큰 요청](https://kubernetes-csi.github.io/docs/token-requests.html)을 참조한다.
-- `CSIStorageCapacity`: CSI 드라이버가 스토리지 용량 정보를 게시하고 쿠버네티스 스케줄러가 파드를 스케줄할 때 해당 정보를 사용하도록 한다. [스토리지 용량](/docs/concepts/storage/storage-capacity/)을 참고한다.
+- `CSIServiceAccountToken` : 볼륨을 마운트하는 파드의 서비스 계정 토큰을 받을 수 있도록
+ CSI 드라이버를 활성화한다.
+ [토큰 요청](https://kubernetes-csi.github.io/docs/token-requests.html)을 참조한다.
+- `CSIStorageCapacity`: CSI 드라이버가 스토리지 용량 정보를 게시하고
+ 쿠버네티스 스케줄러가 파드를 스케줄할 때 해당 정보를 사용하도록 한다.
+ [스토리지 용량](/docs/concepts/storage/storage-capacity/)을 참고한다.
자세한 내용은 [`csi` 볼륨 유형](/ko/docs/concepts/storage/volumes/#csi) 문서를 확인한다.
-- `CSIVolumeFSGroupPolicy`: CSI드라이버가 `fsGroupPolicy` 필드를 사용하도록 허용한다. 이 필드는 CSI드라이버에서 생성된 볼륨이 마운트될 때 볼륨 소유권과 권한 수정을 지원하는지 여부를 제어한다.
-- `CustomCPUCFSQuotaPeriod`: 노드가 CPUCFSQuotaPeriod를 변경하도록 한다.
+- `CSIVolumeFSGroupPolicy`: CSI드라이버가 `fsGroupPolicy` 필드를 사용하도록 허용한다.
+ 이 필드는 CSI드라이버에서 생성된 볼륨이 마운트될 때 볼륨 소유권과
+ 권한 수정을 지원하는지 여부를 제어한다.
+- `ConfigurableFSGroupPolicy`: 사용자가 파드에 볼륨을 마운트할 때 fsGroups에 대한
+ 볼륨 권한 변경 정책을 구성할 수 있다. 자세한 내용은
+ [파드의 볼륨 권한 및 소유권 변경 정책 구성](/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods)을
+ 참고한다.
+- `CronJobControllerV2`: {{< glossary_tooltip text="크론잡(CronJob)" term_id="cronjob" >}}
+ 컨트롤러의 대체 구현을 사용한다. 그렇지 않으면,
+ 동일한 컨트롤러의 버전 1이 선택된다.
+ 버전 2 컨트롤러는 실험적인 성능 향상을 제공한다.
+- `CustomCPUCFSQuotaPeriod`: [kubelet config](/docs/tasks/administer-cluster/kubelet-config-file/)에서
+ `cpuCFSQuotaPeriod` 를 노드가 변경할 수 있도록 한다.
- `CustomPodDNS`: `dnsConfig` 속성을 사용하여 파드의 DNS 설정을 사용자 정의할 수 있다.
자세한 내용은 [파드의 DNS 설정](/ko/docs/concepts/services-networking/dns-pod-service/#pod-dns-config)을
확인한다.
@@ -466,147 +546,248 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
- `CustomResourceWebhookConversion`: [커스텀리소스데피니션](/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources/)에서
생성된 리소스에 대해 웹 훅 기반의 변환을 활성화한다.
실행 중인 파드 문제를 해결한다.
-- `DisableAcceleratorUsageMetrics`: [kubelet이 수집한 액셀러레이터 지표 비활성화](/ko/docs/concepts/cluster-administration/system-metrics/#액셀러레이터-메트릭-비활성화).
-- `DevicePlugins`: 노드에서 [장치 플러그인](/ko/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
- 기반 리소스 프로비저닝을 활성화한다.
- `DefaultPodTopologySpread`: `PodTopologySpread` 스케줄링 플러그인을 사용하여
[기본 분배](/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints/#내부-기본-제약)를 수행한다.
-- `DownwardAPIHugePages`: 다운워드 API에서 hugepages 사용을 활성화한다.
+- `DevicePlugins`: 노드에서 [장치 플러그인](/ko/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
+ 기반 리소스 프로비저닝을 활성화한다.
+- `DisableAcceleratorUsageMetrics`:
+ [kubelet이 수집한 액셀러레이터 지표 비활성화](/ko/docs/concepts/cluster-administration/system-metrics/#액셀러레이터-메트릭-비활성화).
+- `DownwardAPIHugePages`: [다운워드 API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information)에서
+ hugepages 사용을 활성화한다.
- `DryRun`: 서버 측의 [dry run](/docs/reference/using-api/api-concepts/#dry-run) 요청을
요청을 활성화하여 커밋하지 않고 유효성 검사, 병합 및 변화를 테스트할 수 있다.
- `DynamicAuditing`(*사용 중단됨*): v1.19 이전의 버전에서 동적 감사를 활성화하는 데 사용된다.
-- `DynamicKubeletConfig`: kubelet의 동적 구성을 활성화한다. [kubelet 재구성](/docs/tasks/administer-cluster/reconfigure-kubelet/)을 참고한다.
-- `DynamicProvisioningScheduling`: 볼륨 스케줄을 인식하고 PV 프로비저닝을 처리하도록 기본 스케줄러를 확장한다.
+- `DynamicKubeletConfig`: kubelet의 동적 구성을 활성화한다.
+ [kubelet 재구성](/docs/tasks/administer-cluster/reconfigure-kubelet/)을 참고한다.
+- `DynamicProvisioningScheduling`: 볼륨 토폴로지를 인식하고 PV 프로비저닝을 처리하도록
+ 기본 스케줄러를 확장한다.
이 기능은 v1.12의 `VolumeScheduling` 기능으로 대체되었다.
-- `DynamicVolumeProvisioning`(*사용 중단됨*): 파드에 퍼시스턴트 볼륨의 [동적 프로비저닝](/ko/docs/concepts/storage/dynamic-provisioning/)을 활성화한다.
-- `EnableAggregatedDiscoveryTimeout` (*사용 중단됨*): 수집된 검색 호출에서 5초 시간 초과를 활성화한다.
-- `EnableEquivalenceClassCache`: 스케줄러가 파드를 스케줄링할 때 노드의 동등성을 캐시할 수 있게 한다.
-- `EphemeralContainers`: 파드를 실행하기 위한 {{< glossary_tooltip text="임시 컨테이너"
- term_id="ephemeral-container" >}}를 추가할 수 있다.
-- `EvenPodsSpread`: 토폴로지 도메인 간에 파드를 균등하게 스케줄링할 수 있다. [파드 토폴로지 분배 제약 조건](/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints/)을 참고한다.
--`ExecProbeTimeout` : kubelet이 exec 프로브 시간 초과를 준수하는지 확인한다. 이 기능 게이트는 기존 워크로드가 쿠버네티스가 exec 프로브 제한 시간을 무시한 현재 수정된 결함에 의존하는 경우 존재한다. [준비성 프로브](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes)를 참조한다.
-- `ExpandInUsePersistentVolumes`: 사용 중인 PVC를 확장할 수 있다. [사용 중인 퍼시스턴트볼륨클레임 크기 조정](/ko/docs/concepts/storage/persistent-volumes/#사용-중인-퍼시스턴트볼륨클레임-크기-조정)을 참고한다.
-- `ExpandPersistentVolumes`: 퍼시스턴트 볼륨 확장을 활성화한다. [퍼시스턴트 볼륨 클레임 확장](/ko/docs/concepts/storage/persistent-volumes/#퍼시스턴트-볼륨-클레임-확장)을 참고한다.
-- `ExperimentalCriticalPodAnnotation`: 특정 파드에 *critical* 로 어노테이션을 달아서 [스케줄링이 보장되도록](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) 한다.
+- `DynamicVolumeProvisioning`(*사용 중단됨*): 파드에 퍼시스턴트 볼륨의
+ [동적 프로비저닝](/ko/docs/concepts/storage/dynamic-provisioning/)을 활성화한다.
+- `EfficientWatchResumption`: 스토리지에서 생성된 북마크(진행
+ 알림) 이벤트를 사용자에게 전달할 수 있다. 이것은 감시 작업에만
+ 적용된다.
+- `EnableAggregatedDiscoveryTimeout` (*사용 중단됨*): 수집된 검색 호출에서 5초
+ 시간 초과를 활성화한다.
+- `EnableEquivalenceClassCache`: 스케줄러가 파드를 스케줄링할 때 노드의
+ 동등성을 캐시할 수 있게 한다.
+- `EndpointSlice`: 보다 스케일링 가능하고 확장 가능한 네트워크 엔드포인트에 대한
+ 엔드포인트슬라이스(EndpointSlices)를 활성화한다. [엔드포인트슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다.
+- `EndpointSliceNodeName` : 엔드포인트슬라이스 `nodeName` 필드를 활성화한다.
+- `EndpointSliceProxying`: 활성화되면, 리눅스에서 실행되는
+ kube-proxy는 엔드포인트 대신 엔드포인트슬라이스를
+ 기본 데이터 소스로 사용하여 확장성과 성능을 향상시킨다.
+ [엔드포인트 슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다.
+- `EndpointSliceTerminatingCondition`: 엔드포인트슬라이스 `terminating` 및 `serving`
+ 조건 필드를 활성화한다.
+- `EphemeralContainers`: 파드를 실행하기 위한
+ {{< glossary_tooltip text="임시 컨테이너" term_id="ephemeral-container" >}}를
+ 추가할 수 있다.
+- `EvenPodsSpread`: 토폴로지 도메인 간에 파드를 균등하게 스케줄링할 수 있다.
+ [파드 토폴로지 분배 제약 조건](/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints/)을 참고한다.
+- `ExecProbeTimeout` : kubelet이 exec 프로브 시간 초과를 준수하는지 확인한다.
+ 이 기능 게이트는 기존 워크로드가 쿠버네티스가 exec 프로브 제한 시간을 무시한
+ 현재 수정된 결함에 의존하는 경우 존재한다.
+ [준비성 프로브](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes)를 참조한다.
+- `ExpandCSIVolumes`: CSI 볼륨 확장을 활성화한다.
+- `ExpandInUsePersistentVolumes`: 사용 중인 PVC를 확장할 수 있다.
+ [사용 중인 퍼시스턴트볼륨클레임 크기 조정](/ko/docs/concepts/storage/persistent-volumes/#사용-중인-퍼시스턴트볼륨클레임-크기-조정)을 참고한다.
+- `ExpandPersistentVolumes`: 퍼시스턴트 볼륨 확장을 활성화한다.
+ [퍼시스턴트 볼륨 클레임 확장](/ko/docs/concepts/storage/persistent-volumes/#퍼시스턴트-볼륨-클레임-확장)을 참고한다.
+- `ExperimentalCriticalPodAnnotation`: 특정 파드에 *critical* 로
+ 어노테이션을 달아서 [스케줄링이 보장되도록](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) 한다.
이 기능은 v1.13부터 파드 우선 순위 및 선점으로 인해 사용 중단되었다.
- `ExperimentalHostUserNamespaceDefaultingGate`: 사용자 네임스페이스를 호스트로
기본 활성화한다. 이것은 다른 호스트 네임스페이스, 호스트 마운트,
권한이 있는 컨테이너 또는 특정 비-네임스페이스(non-namespaced) 기능(예: `MKNODE`, `SYS_MODULE` 등)을
사용하는 컨테이너를 위한 것이다. 도커 데몬에서 사용자 네임스페이스
재 매핑이 활성화된 경우에만 활성화해야 한다.
-- `EndpointSlice`: 보다 스케일링 가능하고 확장 가능한 네트워크 엔드포인트에 대한
- 엔드포인트 슬라이스를 활성화한다. [엔드포인트 슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다.
--`EndpointSliceNodeName` : 엔드포인트슬라이스 `nodeName` 필드를 활성화한다.
--`EndpointSliceTerminating` : 엔드포인트슬라이스 `terminating` 및 `serving` 조건 필드를
- 활성화한다.
-- `EndpointSliceProxying`: 이 기능 게이트가 활성화되면, 리눅스에서 실행되는
- kube-proxy는 엔드포인트 대신 엔드포인트슬라이스를
- 기본 데이터 소스로 사용하여 확장성과 성능을 향상시킨다.
- [엔드포인트 슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다.
-- `WindowsEndpointSliceProxying`: 이 기능 게이트가 활성화되면, 윈도우에서 실행되는
- kube-proxy는 엔드포인트 대신 엔드포인트슬라이스를
- 기본 데이터 소스로 사용하여 확장성과 성능을 향상시킨다.
- [엔드포인트 슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다.
- `GCERegionalPersistentDisk`: GCE에서 지역 PD 기능을 활성화한다.
-- `GenericEphemeralVolume`: 일반 볼륨의 모든 기능을 지원하는 임시, 인라인 볼륨을 활성화한다(타사 스토리지 공급 업체, 스토리지 용량 추적, 스냅샷으로부터 복원 등에서 제공할 수 있음). [임시 볼륨](/docs/concepts/storage/ephemeral-volumes/)을 참고한다.
--`GracefulNodeShutdown` : kubelet에서 정상 종료를 지원한다. 시스템 종료 중에 kubelet은 종료 이벤트를 감지하고 노드에서 실행중인 파드를 정상적으로 종료하려고 시도한다. 자세한 내용은 [Graceful Node Shutdown](/ko/docs/concepts/architecture/nodes/#그레이스풀-graceful-노드-셧다운)을 참조한다.
-- `HugePages`: 사전 할당된 [huge page](/ko/docs/tasks/manage-hugepages/scheduling-hugepages/)의 할당 및 사용을 활성화한다.
-- `HugePageStorageMediumSize`: 사전 할당된 [huge page](/ko/docs/tasks/manage-hugepages/scheduling-hugepages/)의 여러 크기를 지원한다.
-- `HyperVContainer`: 윈도우 컨테이너를 위한 [Hyper-V 격리](https://docs.microsoft.com/ko-kr/virtualization/windowscontainers/manage-containers/hyperv-container) 기능을 활성화한다.
-- `HPAScaleToZero`: 사용자 정의 또는 외부 메트릭을 사용할 때 `HorizontalPodAutoscaler` 리소스에 대해 `minReplicas` 를 0으로 설정한다.
-- `ImmutableEphemeralVolumes`: 안정성과 성능 향상을 위해 개별 시크릿(Secret)과 컨피그맵(ConfigMap)을 변경할 수 없는(immutable) 것으로 표시할 수 있다.
-- `KubeletConfigFile`: 구성 파일을 사용하여 지정된 파일에서 kubelet 구성을 로드할 수 있다.
- 자세한 내용은 [구성 파일을 통해 kubelet 파라미터 설정](/docs/tasks/administer-cluster/kubelet-config-file/)을 참고한다.
+- `GenericEphemeralVolume`: 일반 볼륨의 모든 기능을 지원하는 임시, 인라인
+ 볼륨을 활성화한다(타사 스토리지 공급 업체, 스토리지 용량 추적, 스냅샷으로부터 복원
+ 등에서 제공할 수 있음).
+ [임시 볼륨](/docs/concepts/storage/ephemeral-volumes/)을 참고한다.
+- `GracefulNodeShutdown` : kubelet에서 정상 종료를 지원한다.
+ 시스템 종료 중에 kubelet은 종료 이벤트를 감지하고 노드에서 실행 중인
+ 파드를 정상적으로 종료하려고 시도한다. 자세한 내용은
+ [Graceful Node Shutdown](/ko/docs/concepts/architecture/nodes/#그레이스풀-graceful-노드-셧다운)을
+ 참조한다.
+- `HPAContainerMetrics`: `HorizontalPodAutoscaler`가 대상 파드의
+  개별 컨테이너 메트릭을 기반으로 확장할 수 있도록 한다.
+- `HPAScaleToZero`: 사용자 정의 또는 외부 메트릭을 사용할 때 `HorizontalPodAutoscaler` 리소스에 대해
+ `minReplicas` 를 0으로 설정한다.
+- `HugePages`: 사전 할당된 [huge page](/ko/docs/tasks/manage-hugepages/scheduling-hugepages/)의
+ 할당 및 사용을 활성화한다.
+- `HugePageStorageMediumSize`: 사전 할당된 [huge page](/ko/docs/tasks/manage-hugepages/scheduling-hugepages/)의
+ 여러 크기를 지원한다.
+- `HyperVContainer`: 윈도우 컨테이너를 위한
+ [Hyper-V 격리](https://docs.microsoft.com/ko-kr/virtualization/windowscontainers/manage-containers/hyperv-container)
+ 기능을 활성화한다.
+- `IPv6DualStack`: IPv6에 대한 [듀얼 스택](/ko/docs/concepts/services-networking/dual-stack/)
+ 지원을 활성화한다.
+- `ImmutableEphemeralVolumes`: 안정성과 성능 향상을 위해 개별 시크릿(Secret)과 컨피그맵(ConfigMap)을
+ 변경할 수 없는(immutable) 것으로 표시할 수 있다.
+- `KubeletConfigFile`: 구성 파일을 사용하여 지정된 파일에서
+ kubelet 구성을 로드할 수 있다.
+ 자세한 내용은 [구성 파일을 통해 kubelet 파라미터 설정](/docs/tasks/administer-cluster/kubelet-config-file/)을
+ 참고한다.
- `KubeletCredentialProviders`: 이미지 풀 자격 증명에 대해 kubelet exec 자격 증명 공급자를 활성화한다.
- `KubeletPluginsWatcher`: kubelet이 [CSI 볼륨 드라이버](/ko/docs/concepts/storage/volumes/#csi)와 같은
플러그인을 검색할 수 있도록 프로브 기반 플러그인 감시자(watcher) 유틸리티를 사용한다.
-- `KubeletPodResources`: kubelet의 파드 리소스 grpc 엔드포인트를 활성화한다.
- 자세한 내용은 [장치 모니터링 지원](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md)을 참고한다.
-- `LegacyNodeRoleBehavior`: 비활성화되면, 서비스 로드 밸런서 및 노드 중단의 레거시 동작은 `NodeDisruptionExclusion` 과 `ServiceNodeExclusion` 에 의해 제공된 기능별 레이블을 대신하여 `node-role.kubernetes.io/master` 레이블을 무시한다.
-- `LocalStorageCapacityIsolation`: [로컬 임시 스토리지](/ko/docs/concepts/configuration/manage-resources-containers/)와 [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir)의 `sizeLimit` 속성을 사용할 수 있게 한다.
-- `LocalStorageCapacityIsolationFSQuotaMonitoring`: [로컬 임시 스토리지](/ko/docs/concepts/configuration/manage-resources-containers/)에 `LocalStorageCapacityIsolation` 이 활성화되고 [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir)의 백업 파일시스템이 프로젝트 쿼터를 지원하고 활성화된 경우, 파일시스템 사용보다는 프로젝트 쿼터를 사용하여 [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir) 스토리지 사용을 모니터링하여 성능과 정확성을 향상시킨다.
-- `MixedProtocolLBService`: 동일한 로드밸런서 유형 서비스 인스턴스에서 다른 프로토콜 사용을 활성화한다.
-- `MountContainers`: 호스트의 유틸리티 컨테이너를 볼륨 마운터로 사용할 수 있다.
+- `KubeletPodResources`: kubelet의 파드 리소스 gRPC 엔드포인트를 활성화한다. 자세한 내용은
+ [장치 모니터링 지원](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md)을
+ 참고한다.
+- `LegacyNodeRoleBehavior`: 비활성화되면, 서비스 로드 밸런서 및 노드 중단의 레거시 동작은
+ `NodeDisruptionExclusion` 과 `ServiceNodeExclusion` 에 의해 제공된 기능별 레이블을 대신하여
+ `node-role.kubernetes.io/master` 레이블을 무시한다.
+- `LocalStorageCapacityIsolation`: [로컬 임시 스토리지](/ko/docs/concepts/configuration/manage-resources-containers/)와
+ [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir)의
+ `sizeLimit` 속성을 사용할 수 있게 한다.
+- `LocalStorageCapacityIsolationFSQuotaMonitoring`: [로컬 임시 스토리지](/ko/docs/concepts/configuration/manage-resources-containers/)에
+ `LocalStorageCapacityIsolation` 이 활성화되고
+ [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir)의
+ 백업 파일시스템이 프로젝트 쿼터를 지원하고 활성화된 경우, 파일시스템 사용보다는
+ 프로젝트 쿼터를 사용하여 [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir)
+ 스토리지 사용을 모니터링하여 성능과 정확성을
+ 향상시킨다.
+- `MixedProtocolLBService`: 동일한 로드밸런서 유형 서비스 인스턴스에서 다른 프로토콜
+ 사용을 활성화한다.
+- `MountContainers` (*사용 중단됨*): 호스트의 유틸리티 컨테이너를 볼륨 마운터로
+ 사용할 수 있다.
- `MountPropagation`: 한 컨테이너에서 다른 컨테이너 또는 파드로 마운트된 볼륨을 공유할 수 있다.
자세한 내용은 [마운트 전파(propagation)](/ko/docs/concepts/storage/volumes/#마운트-전파-propagation)을 참고한다.
-- `NodeDisruptionExclusion`: 영역(zone) 장애 시 노드가 제외되지 않도록 노드 레이블 `node.kubernetes.io/exclude-disruption` 사용을 활성화한다.
+- `NodeDisruptionExclusion`: 영역(zone) 장애 시 노드가 제외되지 않도록 노드 레이블 `node.kubernetes.io/exclude-disruption`
+ 사용을 활성화한다.
- `NodeLease`: 새로운 리스(Lease) API가 노드 상태 신호로 사용될 수 있는 노드 하트비트(heartbeats)를 보고할 수 있게 한다.
-- `NonPreemptingPriority`: 프라이어리티클래스(PriorityClass)와 파드에 NonPreempting 옵션을 활성화한다.
+- `NonPreemptingPriority`: 프라이어리티클래스(PriorityClass)와 파드에 `preemptionPolicy` 필드를 활성화한다.
+- `PVCProtection`: 파드에서 사용 중일 때 퍼시스턴트볼륨클레임(PVC)이
+ 삭제되지 않도록 한다.
- `PersistentLocalVolumes`: 파드에서 `local` 볼륨 유형의 사용을 활성화한다.
`local` 볼륨을 요청하는 경우 파드 어피니티를 지정해야 한다.
- `PodDisruptionBudget`: [PodDisruptionBudget](/docs/tasks/run-application/configure-pdb/) 기능을 활성화한다.
-- `PodOverhead`: 파드 오버헤드를 판단하기 위해 [파드오버헤드(PodOverhead)](/ko/docs/concepts/scheduling-eviction/pod-overhead/) 기능을 활성화한다.
-- `PodPriority`: [우선 순위](/ko/docs/concepts/configuration/pod-priority-preemption/)를 기반으로 파드의 스케줄링 취소와 선점을 활성화한다.
+- `PodOverhead`: 파드 오버헤드를 판단하기 위해 [파드오버헤드(PodOverhead)](/ko/docs/concepts/scheduling-eviction/pod-overhead/)
+ 기능을 활성화한다.
+- `PodPriority`: [우선 순위](/ko/docs/concepts/configuration/pod-priority-preemption/)를
+ 기반으로 파드의 스케줄링 취소와 선점을 활성화한다.
- `PodReadinessGates`: 파드 준비성 평가를 확장하기 위해
`PodReadinessGate` 필드 설정을 활성화한다. 자세한 내용은 [파드의 준비성 게이트](/ko/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate)를
참고한다.
- `PodShareProcessNamespace`: 파드에서 실행되는 컨테이너 간에 단일 프로세스 네임스페이스를
공유하기 위해 파드에서 `shareProcessNamespace` 설정을 활성화한다. 자세한 내용은
[파드의 컨테이너 간 프로세스 네임스페이스 공유](/docs/tasks/configure-pod-container/share-process-namespace/)에서 확인할 수 있다.
-- `ProcMountType`: 컨테이너의 ProcMountType 제어를 활성화한다.
-- `PVCProtection`: 파드에서 사용 중일 때 퍼시스턴트볼륨클레임(PVC)이
- 삭제되지 않도록 한다.
-- `QOSReserved`: QoS 수준에서 리소스 예약을 허용하여 낮은 QoS 수준의 파드가 더 높은 QoS 수준에서
- 요청된 리소스로 파열되는 것을 방지한다(현재 메모리만 해당).
+- `ProcMountType`: SecurityContext의 `procMount` 필드를 설정하여
+ 컨테이너의 proc 타입의 마운트를 제어할 수 있다.
+- `QOSReserved`: QoS 수준에서 리소스 예약을 허용하여 낮은 QoS 수준의 파드가
+ 더 높은 QoS 수준에서 요청된 리소스로 파열되는 것을 방지한다
+ (현재 메모리만 해당).
+- `RemainingItemCount`: API 서버가
+ [청크(chunking) 목록 요청](/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks)에 대한
+ 응답에서 남은 항목 수를 표시하도록 허용한다.
+- `RemoveSelfLink`: ObjectMeta 및 ListMeta에서 `selfLink` 를 사용 중단(deprecate)하고
+  제거한다.
- `ResourceLimitsPriorityFunction` (*사용 중단됨*): 입력 파드의 CPU 및 메모리 한도 중
하나 이상을 만족하는 노드에 가능한 최저 점수 1을 할당하는
스케줄러 우선 순위 기능을 활성화한다. 의도는 동일한 점수를 가진
노드 사이의 관계를 끊는 것이다.
- `ResourceQuotaScopeSelectors`: 리소스 쿼터 범위 셀렉터를 활성화한다.
-- `RootCAConfigMap`: 모든 네임 스페이스에 `kube-root-ca.crt`라는 {{< glossary_tooltip text="컨피그맵" term_id="configmap" >}}을 게시하도록 kube-controller-manager를 구성한다. 이 컨피그맵에는 kube-apiserver에 대한 연결을 확인하는 데 사용되는 CA 번들이 포함되어 있다.
- 자세한 내용은 [바운드 서비스 계정 토큰](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)을 참조한다.
+- `RootCAConfigMap`: 모든 네임스페이스에 `kube-root-ca.crt`라는
+ {{< glossary_tooltip text="컨피그맵" term_id="configmap" >}}을 게시하도록
+ `kube-controller-manager` 를 구성한다. 이 컨피그맵에는 kube-apiserver에 대한 연결을 확인하는 데
+ 사용되는 CA 번들이 포함되어 있다. 자세한 내용은
+ [바운드 서비스 계정 토큰](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)을
+ 참조한다.
- `RotateKubeletClientCertificate`: kubelet에서 클라이언트 TLS 인증서의 로테이션을 활성화한다.
자세한 내용은 [kubelet 구성](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)을 참고한다.
- `RotateKubeletServerCertificate`: kubelet에서 서버 TLS 인증서의 로테이션을 활성화한다.
- 자세한 내용은 [kubelet 구성](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)을 참고한다.
-- `RunAsGroup`: 컨테이너의 init 프로세스에 설정된 기본 그룹 ID 제어를 활성화한다.
-- `RuntimeClass`: 컨테이너 런타임 구성을 선택하기 위해 [런타임클래스(RuntimeClass)](/ko/docs/concepts/containers/runtime-class/) 기능을 활성화한다.
-- `ScheduleDaemonSetPods`: 데몬셋(DaemonSet) 컨트롤러 대신 기본 스케줄러로 데몬셋 파드를 스케줄링할 수 있다.
-- `SCTPSupport`: 파드, 서비스, 엔드포인트, 엔드포인트슬라이스 및 네트워크폴리시 정의에서 _SCTP_ `protocol` 값을 활성화한다.
-- `ServerSideApply`: API 서버에서 [SSA(Sever Side Apply)](/docs/reference/using-api/server-side-apply/) 경로를 활성화한다.
-- `ServiceAccountIssuerDiscovery`: API 서버에서 서비스 어카운트 발행자에 대해 OIDC 디스커버리 엔드포인트(발급자 및 JWKS URL)를 활성화한다. 자세한 내용은 [파드의 서비스 어카운트 구성](/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery)을 참고한다.
+- `RunAsGroup`: 컨테이너의 init 프로세스에 설정된 기본 그룹 ID 제어를
+ 활성화한다.
+- `RuntimeClass`: 컨테이너 런타임 구성을 선택하기 위해 [런타임클래스(RuntimeClass)](/ko/docs/concepts/containers/runtime-class/)
+ 기능을 활성화한다.
+- `ScheduleDaemonSetPods`: 데몬셋(DaemonSet) 컨트롤러 대신 기본 스케줄러로 데몬셋 파드를
+ 스케줄링할 수 있다.
+- `SCTPSupport`: 파드, 서비스, 엔드포인트, 엔드포인트슬라이스 및 네트워크폴리시 정의에서
+ _SCTP_ `protocol` 값을 활성화한다.
+- `ServerSideApply`: API 서버에서 [SSA(Server Side Apply)](/docs/reference/using-api/server-side-apply/)
+ 경로를 활성화한다.
+- `ServiceAccountIssuerDiscovery`: API 서버에서 서비스 어카운트 발행자에 대해 OIDC 디스커버리 엔드포인트(발급자 및
+ JWKS URL)를 활성화한다. 자세한 내용은
+ [파드의 서비스 어카운트 구성](/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery)을
+ 참고한다.
- `ServiceAppProtocol`: 서비스와 엔드포인트에서 `AppProtocol` 필드를 활성화한다.
-- `ServiceLBNodePortControl`: 서비스에서`spec.allocateLoadBalancerNodePorts` 필드를 활성화한다.
+- `ServiceLBNodePortControl`: 서비스에서`spec.allocateLoadBalancerNodePorts` 필드를
+ 활성화한다.
- `ServiceLoadBalancerFinalizer`: 서비스 로드 밸런서에 대한 Finalizer 보호를 활성화한다.
-- `ServiceNodeExclusion`: 클라우드 제공자가 생성한 로드 밸런서에서 노드를 제외할 수 있다.
- "`alpha.service-controller.kubernetes.io/exclude-balancer`" 키 또는 `node.kubernetes.io/exclude-from-external-load-balancers` 로 레이블이 지정된 경우 노드를 제외할 수 있다.
-- `ServiceTopology`: 서비스가 클러스터의 노드 토폴로지를 기반으로 트래픽을 라우팅할 수 있도록 한다. 자세한 내용은 [서비스토폴로지(ServiceTopology)](/ko/docs/concepts/services-networking/service-topology/)를 참고한다.
-- `SizeMemoryBackedVolumes`: kubelet 지원을 사용하여 메모리 백업 볼륨의 크기를 조정한다. 자세한 내용은 [volumes](/ko/docs/concepts/storage/volumes)를 참조한다.
-- `SetHostnameAsFQDN`: 전체 주소 도메인 이름(FQDN)을 파드의 호스트 이름으로 설정하는 기능을 활성화한다. [파드의 `setHostnameAsFQDN` 필드](/ko/docs/concepts/services-networking/dns-pod-service/#파드의-sethostnameasfqdn-필드)를 참고한다.
-- `StartupProbe`: kubelet에서 [스타트업](/ko/docs/concepts/workloads/pods/pod-lifecycle/#언제-스타트업-프로브를-사용해야-하는가) 프로브를 활성화한다.
+- `ServiceNodeExclusion`: 클라우드 제공자가 생성한 로드 밸런서에서 노드를
+ 제외할 수 있다. "`node.kubernetes.io/exclude-from-external-load-balancers`"로
+ 레이블이 지정된 경우 노드를 제외할 수 있다.
+- `ServiceTopology`: 서비스가 클러스터의 노드 토폴로지를 기반으로 트래픽을 라우팅할 수
+ 있도록 한다. 자세한 내용은
+ [서비스토폴로지(ServiceTopology)](/ko/docs/concepts/services-networking/service-topology/)를
+ 참고한다.
+- `SizeMemoryBackedVolumes`: kubelet 지원을 사용하여 메모리 백업 볼륨의 크기를 조정한다.
+ 자세한 내용은 [volumes](/ko/docs/concepts/storage/volumes)를 참조한다.
+- `SetHostnameAsFQDN`: 전체 주소 도메인 이름(FQDN)을 파드의 호스트 이름으로
+ 설정하는 기능을 활성화한다.
+ [파드의 `setHostnameAsFQDN` 필드](/ko/docs/concepts/services-networking/dns-pod-service/#파드의-sethostnameasfqdn-필드)를 참고한다.
+- `StartupProbe`: kubelet에서
+ [스타트업](/ko/docs/concepts/workloads/pods/pod-lifecycle/#언제-스타트업-프로브를-사용해야-하는가)
+ 프로브를 활성화한다.
- `StorageObjectInUseProtection`: 퍼시스턴트볼륨 또는 퍼시스턴트볼륨클레임 오브젝트가 여전히
사용 중인 경우 삭제를 연기한다.
-- `StorageVersionHash`: API 서버가 디스커버리에서 스토리지 버전 해시를 노출하도록 허용한다.
+- `StorageVersionAPI`: [스토리지 버전 API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#storageversion-v1alpha1-internal-apiserver-k8s-io)를
+ 활성화한다.
+- `StorageVersionHash`: API 서버가 디스커버리에서 스토리지 버전 해시를 노출하도록
+ 허용한다.
- `StreamingProxyRedirects`: 스트리밍 요청을 위해 백엔드(kubelet)에서 리디렉션을
가로채서 따르도록 API 서버에 지시한다.
스트리밍 요청의 예로는 `exec`, `attach` 및 `port-forward` 요청이 있다.
- `SupportIPVSProxyMode`: IPVS를 사용하여 클러스터 내 서비스 로드 밸런싱을 제공한다.
자세한 내용은 [서비스 프록시](/ko/docs/concepts/services-networking/service/#가상-ip와-서비스-프록시)를 참고한다.
- `SupportPodPidsLimit`: 파드의 PID 제한을 지원한다.
-- `SupportNodePidsLimit`: 노드에서 PID 제한 지원을 활성화한다. `--system-reserved` 및 `--kube-reserved` 옵션의 `pid=` 매개 변수를 지정하여 지정된 수의 프로세스 ID가 시스템 전체와 각각 쿠버네티스 시스템 데몬에 대해 예약되도록 할 수 있다.
-- `Sysctls`: 각 파드에 설정할 수 있는 네임스페이스 커널 파라미터(sysctl)를 지원한다.
- 자세한 내용은 [sysctl](/docs/tasks/administer-cluster/sysctl-cluster/)을 참고한다.
-- `TaintBasedEvictions`: 노드의 테인트(taint) 및 파드의 톨러레이션(toleration)을 기반으로 노드에서 파드를 축출할 수 있다.
- 자세한 내용은 [테인트와 톨러레이션](/ko/docs/concepts/scheduling-eviction/taint-and-toleration/)을 참고한다.
-- `TaintNodesByCondition`: [노드 컨디션](/ko/docs/concepts/architecture/nodes/#condition)을 기반으로 자동 테인트 노드를 활성화한다.
+- `SupportNodePidsLimit`: 노드에서 PID 제한 지원을 활성화한다.
+ `--system-reserved` 및 `--kube-reserved` 옵션의 `pid=`
+ 파라미터를 지정하여 지정된 수의 프로세스 ID가
+ 시스템 전체와 각각 쿠버네티스 시스템 데몬에 대해 예약되도록
+ 할 수 있다.
+- `Sysctls`: 각 파드에 설정할 수 있는 네임스페이스 커널
+ 파라미터(sysctl)를 지원한다. 자세한 내용은
+ [sysctl](/docs/tasks/administer-cluster/sysctl-cluster/)을 참고한다.
+- `TTLAfterFinished`: [TTL 컨트롤러](/ko/docs/concepts/workloads/controllers/ttlafterfinished/)가
+ 실행이 끝난 후 리소스를 정리하도록
+ 허용한다.
+- `TaintBasedEvictions`: 노드의 테인트(taint) 및 파드의 톨러레이션(toleration)을 기반으로
+ 노드에서 파드를 축출할 수 있다.
+ 자세한 내용은 [테인트와 톨러레이션](/ko/docs/concepts/scheduling-eviction/taint-and-toleration/)을
+ 참고한다.
+- `TaintNodesByCondition`: [노드 컨디션](/ko/docs/concepts/architecture/nodes/#condition)을
+ 기반으로 자동 테인트 노드를 활성화한다.
- `TokenRequest`: 서비스 어카운트 리소스에서 `TokenRequest` 엔드포인트를 활성화한다.
-- `TokenRequestProjection`: [`projected` 볼륨](/ko/docs/concepts/storage/volumes/#projected)을 통해 서비스 어카운트
- 토큰을 파드에 주입할 수 있다.
-- `TopologyManager`: 쿠버네티스의 다른 컴포넌트에 대한 세분화된 하드웨어 리소스 할당을 조정하는 메커니즘을 활성화한다. [노드의 토폴로지 관리 정책 제어](/docs/tasks/administer-cluster/topology-manager/)를 참고한다.
-- `TTLAfterFinished`: [TTL 컨트롤러](/ko/docs/concepts/workloads/controllers/ttlafterfinished/)가 실행이 끝난 후 리소스를 정리하도록 허용한다.
+- `TokenRequestProjection`: [`projected` 볼륨](/ko/docs/concepts/storage/volumes/#projected)을 통해
+ 서비스 어카운트 토큰을 파드에 주입할 수 있다.
+- `TopologyManager`: 쿠버네티스의 다른 컴포넌트에 대한 세분화된 하드웨어 리소스
+ 할당을 조정하는 메커니즘을 활성화한다.
+ [노드의 토폴로지 관리 정책 제어](/docs/tasks/administer-cluster/topology-manager/)를 참고한다.
- `VolumePVCDataSource`: 기존 PVC를 데이터 소스로 지정하는 기능을 지원한다.
- `VolumeScheduling`: 볼륨 토폴로지 인식 스케줄링을 활성화하고
퍼시스턴트볼륨클레임(PVC) 바인딩이 스케줄링 결정을 인식하도록 한다. 또한
`PersistentLocalVolumes` 기능 게이트와 함께 사용될 때
[`local`](/ko/docs/concepts/storage/volumes/#local) 볼륨 유형을 사용할 수 있다.
- `VolumeSnapshotDataSource`: 볼륨 스냅샷 데이터 소스 지원을 활성화한다.
-- `VolumeSubpathEnvExpansion`: 환경 변수를 `subPath`로 확장하기 위해 `subPathExpr` 필드를 활성화한다.
+- `VolumeSubpathEnvExpansion`: 환경 변수를 `subPath`로 확장하기 위해
+ `subPathExpr` 필드를 활성화한다.
+- `WarningHeaders`: API 응답에서 경고 헤더를 보낼 수 있다.
- `WatchBookmark`: 감시자 북마크(watch bookmark) 이벤트 지원을 활성화한다.
-- `WindowsGMSA`: 파드에서 컨테이너 런타임으로 GMSA 자격 증명 스펙을 전달할 수 있다.
-- `WindowsRunAsUserName` : 기본 사용자가 아닌(non-default) 사용자로 윈도우 컨테이너에서 애플리케이션을 실행할 수 있도록 지원한다.
- 자세한 내용은 [RunAsUserName 구성](/docs/tasks/configure-pod-container/configure-runasusername)을 참고한다.
- `WinDSR`: kube-proxy가 윈도우용 DSR 로드 밸런서를 생성할 수 있다.
- `WinOverlay`: kube-proxy가 윈도우용 오버레이 모드에서 실행될 수 있도록 한다.
+- `WindowsGMSA`: 파드에서 컨테이너 런타임으로 GMSA 자격 증명 스펙을 전달할 수 있다.
+- `WindowsRunAsUserName` : 기본 사용자가 아닌(non-default) 사용자로 윈도우 컨테이너에서
+ 애플리케이션을 실행할 수 있도록 지원한다. 자세한 내용은
+ [RunAsUserName 구성](/docs/tasks/configure-pod-container/configure-runasusername)을
+ 참고한다.
+- `WindowsEndpointSliceProxying`: 활성화되면, 윈도우에서 실행되는 kube-proxy는
+ 엔드포인트 대신 엔드포인트슬라이스를 기본 데이터 소스로 사용하여
+ 확장성과 성능을 향상시킨다.
+ [엔드포인트 슬라이스 활성화하기](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다.
## {{% heading "whatsnext" %}}
diff --git a/content/ko/docs/reference/glossary/api-group.md b/content/ko/docs/reference/glossary/api-group.md
index 0c27d3181e070..96f32bd9ce200 100644
--- a/content/ko/docs/reference/glossary/api-group.md
+++ b/content/ko/docs/reference/glossary/api-group.md
@@ -2,7 +2,7 @@
title: API 그룹(API Group)
id: api-group
date: 2019-09-02
-full_link: /ko/docs/concepts/overview/kubernetes-api/#api-groups
+full_link: /ko/docs/concepts/overview/kubernetes-api/#api-그룹과-버전-규칙
short_description: >
쿠버네티스 API의 연관된 경로들의 집합.
@@ -11,9 +11,9 @@ tags:
- fundamental
- architecture
---
-쿠버네티스 API의 연관된 경로들의 집합.
+쿠버네티스 API의 연관된 경로들의 집합.
API 서버의 구성을 변경하여 각 API 그룹을 활성화하거나 비활성화할 수 있다. 특정 리소스에 대한 경로를 비활성화하거나 활성화할 수도 있다. API 그룹을 사용하면 쿠버네티스 API를 더 쉽게 확장할 수 있다. API 그룹은 REST 경로 및 직렬화된 오브젝트의 `apiVersion` 필드에 지정된다.
-* 자세한 내용은 [API 그룹(/ko/docs/concepts/overview/kubernetes-api/#api-groups)을 참조한다.
+* 자세한 내용은 [API 그룹](/ko/docs/concepts/overview/kubernetes-api/#api-그룹과-버전-규칙)을 참조한다.
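+
+예를 들어, 다음 명령으로 특정 API 그룹에 속한 리소스를 살펴볼 수 있다(`apps` 그룹은 설명을 위한 예시이다).
+
+```shell
+# apps API 그룹에 속한 리소스 목록을 조회한다
+kubectl api-resources --api-group=apps
+
+# 클러스터가 제공하는 그룹/버전 전체를 나열한다
+kubectl api-versions
+```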
diff --git a/content/ko/docs/reference/glossary/quantity.md b/content/ko/docs/reference/glossary/quantity.md
new file mode 100644
index 0000000000000..450307841ad05
--- /dev/null
+++ b/content/ko/docs/reference/glossary/quantity.md
@@ -0,0 +1,33 @@
+---
+title: 수량(Quantity)
+id: quantity
+date: 2018-08-07
+full_link:
+short_description: >
+ SI 접미사를 사용하는 작거나 큰 숫자의 정수(whole-number) 표현.
+
+aka:
+tags:
+- core-object
+---
+ SI 접미사를 사용하는 작거나 큰 숫자의 정수(whole-number) 표현.
+
+
+
+수량은 SI 접미사가 포함된 간결한 정수 표기법을 통해서 작거나 큰 숫자를 표현한 것이다.
+분수는 밀리(milli) 단위로 표시되는 반면,
+큰 숫자는 킬로(kilo), 메가(mega), 또는 기가(giga)
+단위로 표시할 수 있다.
+
+
+예를 들어, 숫자 `1.5`는 `1500m`으로, 숫자 `1000`은 `1k`로, `1000000`은
+`1M`으로 표시할 수 있다. 또한, 이진 표기법 접미사도 명시 가능하므로,
+숫자 2048은 `2Ki`로 표기될 수 있다.
+
+허용되는 10진수(10의 거듭 제곱) 단위는 `m` (밀리), `k` (킬로, 의도적인 소문자),
+`M` (메가), `G` (기가), `T` (테라), `P` (페타),
+`E` (엑사)가 있다.
+
+허용되는 2진수(2의 거듭 제곱) 단위는 `Ki` (키비), `Mi` (메비), `Gi` (기비),
+`Ti` (테비), `Pi` (페비), `Ei` (엑비)가 있다.
+
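+예를 들어, 이러한 수량 표기는 다음과 같이 리소스 요청과 제한에 그대로 사용할 수 있다(디플로이먼트 이름 `my-app` 은 설명을 위한 가정이다).
+
+```shell
+# CPU 250 밀리코어, 메모리 64 메비바이트를 요청하고 상한을 지정한다
+kubectl set resources deployment/my-app --requests=cpu=250m,memory=64Mi --limits=cpu=1,memory=1Gi
+```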
diff --git a/content/ko/docs/reference/glossary/secret.md b/content/ko/docs/reference/glossary/secret.md
new file mode 100644
index 0000000000000..63637adc1a238
--- /dev/null
+++ b/content/ko/docs/reference/glossary/secret.md
@@ -0,0 +1,18 @@
+---
+title: 시크릿(Secret)
+id: secret
+date: 2018-04-12
+full_link: /ko/docs/concepts/configuration/secret/
+short_description: >
+ 비밀번호, OAuth 토큰 및 ssh 키와 같은 민감한 정보를 저장한다.
+
+aka:
+tags:
+- core-object
+- security
+---
+ 비밀번호, OAuth 토큰 및 ssh 키와 같은 민감한 정보를 저장한다.
+
+
+
+민감한 정보를 사용하는 방식에 대해 더 세밀하게 제어할 수 있으며, 유휴 상태의 [암호화](/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted)를 포함하여 우발적인 노출 위험을 줄인다. {{< glossary_tooltip text="파드(Pod)" term_id="pod" >}}는 시크릿을 마운트된 볼륨의 파일로 참조하거나, 파드의 이미지를 풀링하는 kubelet이 시크릿을 참조한다. 시크릿은 기밀 데이터에 적합하고 [컨피그맵](/docs/tasks/configure-pod-container/configure-pod-configmap/)은 기밀이 아닌 데이터에 적합하다.
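+
+예를 들어, 다음과 같이 리터럴 값으로 시크릿을 생성해 볼 수 있다(이름 `db-pass` 와 키 값은 설명을 위한 가정이다).
+
+```shell
+# 비밀번호를 담은 시크릿을 생성한다
+kubectl create secret generic db-pass --from-literal=password='S3cr3t!'
+
+# 저장된 값은 base64로 인코딩되어 보관된다
+kubectl get secret db-pass -o yaml
+```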
diff --git a/content/ko/docs/reference/glossary/storage-class.md b/content/ko/docs/reference/glossary/storage-class.md
new file mode 100644
index 0000000000000..63bd655b68d26
--- /dev/null
+++ b/content/ko/docs/reference/glossary/storage-class.md
@@ -0,0 +1,20 @@
+---
+title: 스토리지 클래스(Storage Class)
+id: storageclass
+date: 2018-04-12
+full_link: /ko/docs/concepts/storage/storage-classes
+short_description: >
+ 스토리지클래스는 관리자가 사용 가능한 다양한 스토리지 유형을 설명할 수 있는 방법을 제공한다.
+
+aka:
+tags:
+- core-object
+- storage
+---
+ 스토리지클래스는 관리자가 사용 가능한 다양한 스토리지 유형을 설명할 수 있는 방법을 제공한다.
+
+
+
+스토리지 클래스는 서비스 품질 수준, 백업 정책 혹은 클러스터 관리자가 결정한 임의의 정책에 매핑할 수 있다. 각 스토리지클래스에는 클래스에 속한 {{< glossary_tooltip text="퍼시스턴트 볼륨(Persistent Volume)" term_id="persistent-volume" >}}을 동적으로 프로비저닝해야 할 때 사용되는 `provisioner`, `parameters` 및 `reclaimPolicy` 필드가 있다. 사용자는 스토리지클래스 오브젝트의 이름을 사용하여 특정 클래스를 요청할 수 있다.
+
+
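+예를 들어, 다음은 이 필드들을 사용하는 스토리지클래스의 개략적인 스케치이다(이름 `fast` 와 GCE 프로비저너 선택은 설명을 위한 가정이며, 환경에 맞게 바꾼다).
+
+```shell
+kubectl apply -f - <<EOF
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: fast                        # 설명을 위해 가정한 이름
+provisioner: kubernetes.io/gce-pd   # 환경에 맞는 프로비저너로 바꾼다
+parameters:
+  type: pd-ssd
+reclaimPolicy: Delete
+EOF
+```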
diff --git a/content/ko/docs/setup/production-environment/container-runtimes.md b/content/ko/docs/setup/production-environment/container-runtimes.md
index 4b749ab7e51bc..f51b5f5eefb45 100644
--- a/content/ko/docs/setup/production-environment/container-runtimes.md
+++ b/content/ko/docs/setup/production-environment/container-runtimes.md
@@ -122,7 +122,7 @@ sudo apt-get update && sudo apt-get install -y containerd.io
```shell
# containerd 구성
sudo mkdir -p /etc/containerd
-sudo containerd config default | sudo tee /etc/containerd/config.toml
+containerd config default | sudo tee /etc/containerd/config.toml
```
```shell
@@ -140,7 +140,7 @@ sudo apt-get update && sudo apt-get install -y containerd
```shell
# containerd 구성
sudo mkdir -p /etc/containerd
-sudo containerd config default | sudo tee /etc/containerd/config.toml
+containerd config default | sudo tee /etc/containerd/config.toml
```
```shell
@@ -210,7 +210,7 @@ sudo yum update -y && sudo yum install -y containerd.io
```shell
## containerd 구성
sudo mkdir -p /etc/containerd
-sudo containerd config default | sudo tee /etc/containerd/config.toml
+containerd config default | sudo tee /etc/containerd/config.toml
```
```shell
diff --git a/content/ko/docs/setup/release/version-skew-policy.md b/content/ko/docs/setup/release/version-skew-policy.md
index feb675f8ba6bc..76ff7504fd032 100644
--- a/content/ko/docs/setup/release/version-skew-policy.md
+++ b/content/ko/docs/setup/release/version-skew-policy.md
@@ -1,11 +1,18 @@
---
+
+
+
+
+
+
+
title: 쿠버네티스 버전 및 버전 차이(skew) 지원 정책
content_type: concept
weight: 30
---
-이 문서는 다양한 쿠버네티스 구성 요소 간에 지원되는 최대 버전 차이를 설명한다.
+이 문서는 다양한 쿠버네티스 구성 요소 간에 지원되는 최대 버전 차이를 설명한다.
특정 클러스터 배포 도구는 버전 차이에 대한 추가적인 제한을 설정할 수 있다.
@@ -19,14 +26,14 @@ weight: 30
쿠버네티스 프로젝트는 최근 세 개의 마이너 릴리스 ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}) 에 대한 릴리스 분기를 유지한다. 쿠버네티스 1.19 이상은 약 1년간의 패치 지원을 받는다. 쿠버네티스 1.18 이상은 약 9개월의 패치 지원을 받는다.
-보안 수정사항을 포함한 해당 수정사항은 심각도와 타당성에 따라 세 개의 릴리스 브랜치로 백포트(backport) 될 수 있다.
+보안 수정사항을 포함한 해당 수정사항은 심각도와 타당성에 따라 세 개의 릴리스 브랜치로 백포트(backport) 될 수 있다.
패치 릴리스는 각 브랜치별로 [정기적인 주기](https://git.k8s.io/sig-release/releases/patch-releases.md#cadence)로 제공하며, 필요한 경우 추가 긴급 릴리스도 추가한다.
[릴리스 관리자](https://git.k8s.io/sig-release/release-managers.md) 그룹이 이러한 결정 권한을 가진다.
자세한 내용은 쿠버네티스 [패치 릴리스](https://git.k8s.io/sig-release/releases/patch-releases.md) 페이지를 참조한다.
-## 지원되는 버전 차이
+## 지원되는 버전 차이
### kube-apiserver
@@ -133,6 +140,11 @@ HA 클러스터의 `kube-apiserver` 인스턴스 간에 버전 차이가 있으
필요에 따라서 `kubelet` 인스턴스를 **{{< skew latestVersion >}}** 으로 업그레이드할 수 있다(또는 **{{< skew prevMinorVersion >}}** 아니면 **{{< skew oldestMinorVersion >}}** 으로 유지할 수 있음).
+{{< note >}}
+`kubelet` 마이너 버전 업그레이드를 수행하기 전에, 해당 노드의 파드를 [드레인(drain)](/docs/tasks/administer-cluster/safely-drain-node/)해야 한다.
+인플레이스(In-place) 마이너 버전 `kubelet` 업그레이드는 지원되지 않는다.
+{{< /note >}}
+
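+예를 들어, 다음과 같이 노드를 드레인한 뒤 kubelet을 업그레이드하고 다시 스케줄링을 허용할 수 있다(노드 이름 `node-1` 은 설명을 위한 가정이다).
+
+```shell
+# 파드를 안전하게 내보내고 새 파드의 스케줄링을 막는다
+kubectl drain node-1 --ignore-daemonsets
+
+# 이 시점에 노드의 kubelet 마이너 버전을 업그레이드한다
+
+# 업그레이드 후 노드를 다시 스케줄링 가능하게 만든다
+kubectl uncordon node-1
+```
+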
{{< warning >}}
클러스터 안의 `kubelet` 인스턴스를 `kube-apiserver`의 버전보다 2단계 낮은 버전으로 실행하는 것을 권장하지 않는다:
diff --git a/content/ko/docs/tasks/tools/install-kubectl.md b/content/ko/docs/tasks/tools/install-kubectl.md
index 9d80451e1be1b..70a6a604094b9 100644
--- a/content/ko/docs/tasks/tools/install-kubectl.md
+++ b/content/ko/docs/tasks/tools/install-kubectl.md
@@ -1,4 +1,6 @@
---
+
+
title: kubectl 설치 및 설정
content_type: task
weight: 10
@@ -30,33 +32,73 @@ kubectl을 사용하여 애플리케이션을 배포하고, 클러스터 리소
1. 다음 명령으로 최신 릴리스를 다운로드한다.
- ```
- curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
- ```
+ ```bash
+ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
+ ```
- 특정 버전을 다운로드하려면, `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다.
+ {{< note >}}
+특정 버전을 다운로드하려면, `$(curl -L -s https://dl.k8s.io/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다.
- 예를 들어, 리눅스에서 버전 {{< param "fullversion" >}}을 다운로드하려면, 다음을 입력한다.
- ```
- curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
- ```
+예를 들어, 리눅스에서 버전 {{< param "fullversion" >}}을 다운로드하려면, 다음을 입력한다.
+
+ ```bash
+ curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
+ ```
+ {{< /note >}}
-2. kubectl 바이너리를 실행 가능하게 만든다.
+1. 바이너리를 검증한다. (선택 사항)
- ```
- chmod +x ./kubectl
- ```
+ kubectl 체크섬(checksum) 파일을 다운로드한다.
-3. 바이너리를 PATH가 설정된 디렉터리로 옮긴다.
+ ```bash
+ curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
+ ```
- ```
- sudo mv ./kubectl /usr/local/bin/kubectl
- ```
-4. 설치한 버전이 최신 버전인지 확인한다.
+ kubectl 바이너리를 체크섬 파일을 통해 검증한다.
- ```
- kubectl version --client
- ```
+ ```bash
+ echo "$(}}
+ 동일한 버전의 바이너리와 체크섬을 다운로드한다.
+ {{< /note >}}
+
+1. kubectl을 설치한다.
+
+ ```bash
+ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
+ ```
+
+ {{< note >}}
+ 대상 시스템에 root 접근 권한을 가지고 있지 않더라도, `~/.local/bin` 디렉터리에 kubectl을 설치할 수 있다.
+
+ ```bash
+ mkdir -p ~/.local/bin
+ mv ./kubectl ~/.local/bin/kubectl
+ # 그리고 ~/.local/bin을 $PATH에 추가
+ export PATH="$HOME/.local/bin:$PATH"
+ ```
+
+ {{< /note >}}
+
+1. 설치한 버전이 최신인지 확인한다.
+
+ ```bash
+ kubectl version --client
+ ```
### 기본 패키지 관리 도구를 사용하여 설치
@@ -117,29 +159,65 @@ kubectl version --client
1. 최신 릴리스를 다운로드한다.
```bash
- curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
+ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
```
- 특정 버전을 다운로드하려면, `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다.
+ {{< note >}}
+ 특정 버전을 다운로드하려면, `$(curl -L -s https://dl.k8s.io/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다.
예를 들어, macOS에서 버전 {{< param "fullversion" >}}을 다운로드하려면, 다음을 입력한다.
+
```bash
- curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
- ```
+ curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
+ ```
+
+ {{< /note >}}
+
+1. 바이너리를 검증한다. (선택 사항)
+
+ kubectl 체크섬 파일을 다운로드한다.
+
+ ```bash
+ curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256"
+ ```
+
+ kubectl 바이너리를 체크섬 파일을 통해 검증한다.
+
+ ```bash
+ echo "$(}}
+ 동일한 버전의 바이너리와 체크섬을 다운로드한다.
+ {{< /note >}}
+
+1. kubectl 바이너리를 실행 가능하게 한다.
```bash
chmod +x ./kubectl
```
-3. 바이너리를 PATH가 설정된 디렉터리로 옮긴다.
+1. kubectl 바이너리를 시스템 `PATH` 의 파일 위치로 옮긴다.
```bash
- sudo mv ./kubectl /usr/local/bin/kubectl
+ sudo mv ./kubectl /usr/local/bin/kubectl && \
+ sudo chown root: /usr/local/bin/kubectl
```
-4. 설치한 버전이 최신 버전인지 확인한다.
+1. 설치한 버전이 최신 버전인지 확인한다.
```bash
kubectl version --client
@@ -161,7 +239,7 @@ macOS에서 [Homebrew](https://brew.sh/) 패키지 관리자를 사용하는 경
brew install kubernetes-cli
```
-2. 설치한 버전이 최신 버전인지 확인한다.
+1. 설치한 버전이 최신 버전인지 확인한다.
```bash
kubectl version --client
@@ -178,7 +256,7 @@ macOS에서 [Macports](https://macports.org/) 패키지 관리자를 사용하
sudo port install kubectl
```
-2. 설치한 버전이 최신 버전인지 확인한다.
+1. 설치한 버전이 최신 버전인지 확인한다.
```bash
kubectl version --client
@@ -188,30 +266,55 @@ macOS에서 [Macports](https://macports.org/) 패키지 관리자를 사용하
### 윈도우에서 curl을 사용하여 kubectl 바이너리 설치
-1. [이 링크](https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe)에서 최신 릴리스 {{< param "fullversion" >}}을 다운로드한다.
+1. [최신 릴리스 {{< param "fullversion" >}}](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe)를 다운로드한다.
또는 `curl` 을 설치한 경우, 다음 명령을 사용한다.
- ```bash
- curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
+ ```powershell
+ curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
```
- 최신의 안정 버전(예: 스크립팅을 위한)을 찾으려면, [https://storage.googleapis.com/kubernetes-release/release/stable.txt](https://storage.googleapis.com/kubernetes-release/release/stable.txt)를 참고한다.
+ {{< note >}}
+ 최신의 안정 버전(예: 스크립팅을 위한)을 찾으려면, [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt)를 참고한다.
+ {{< /note >}}
-2. 바이너리를 PATH가 설정된 디렉터리에 추가한다.
+1. 바이너리를 검증한다. (선택 사항)
-3. `kubectl` 의 버전이 다운로드한 버전과 같은지 확인한다.
+ kubectl 체크섬 파일을 다운로드한다.
- ```bash
+ ```powershell
+ curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe.sha256
+ ```
+
+ kubectl 바이너리를 체크섬 파일을 통해 검증한다.
+
+ - 수동으로 `CertUtil` 의 출력과 다운로드한 체크섬 파일을 비교하기 위해서 커맨드 프롬프트를 사용한다.
+
+ ```cmd
+ CertUtil -hashfile kubectl.exe SHA256
+ type kubectl.exe.sha256
+ ```
+
+ - `-eq` 연산자를 통해 `True` 또는 `False` 결과를 얻는 자동 검증을 위해서 PowerShell을 사용한다.
+
+ ```powershell
+ $($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256)
+ ```
+
+1. 바이너리를 `PATH` 가 설정된 디렉터리에 추가한다.
+
+1. `kubectl` 의 버전이 다운로드한 버전과 같은지 확인한다.
+
+ ```cmd
kubectl version --client
```
{{< note >}}
-[윈도우용 도커 데스크톱](https://docs.docker.com/docker-for-windows/#kubernetes)은 자체 버전의 `kubectl` 을 PATH에 추가한다.
-도커 데스크톱을 이전에 설치한 경우, 도커 데스크톱 설치 프로그램에서 추가한 PATH 항목 앞에 PATH 항목을 배치하거나 도커 데스크톱의 `kubectl` 을 제거해야 할 수도 있다.
+[윈도우용 도커 데스크톱](https://docs.docker.com/docker-for-windows/#kubernetes)은 자체 버전의 `kubectl` 을 `PATH` 에 추가한다.
+도커 데스크톱을 이전에 설치한 경우, 도커 데스크톱 설치 프로그램에서 추가한 `PATH` 항목 앞에 `PATH` 항목을 배치하거나 도커 데스크톱의 `kubectl` 을 제거해야 할 수도 있다.
{{< /note >}}
-### PSGallery에서 Powershell로 설치
+### PSGallery에서 PowerShell로 설치
윈도우에서 [PowerShell Gallery](https://www.powershellgallery.com/) 패키지 관리자를 사용하는 경우, PowerShell로 kubectl을 설치하고 업데이트할 수 있다.
@@ -223,12 +326,12 @@ macOS에서 [Macports](https://macports.org/) 패키지 관리자를 사용하
```
{{< note >}}
- `DownloadLocation` 을 지정하지 않으면, `kubectl` 은 사용자의 임시 디렉터리에 설치된다.
+ `DownloadLocation` 을 지정하지 않으면, `kubectl` 은 사용자의 `temp` 디렉터리에 설치된다.
{{< /note >}}
설치 프로그램은 `$HOME/.kube` 를 생성하고 구성 파일을 작성하도록 지시한다.
-2. 설치한 버전이 최신 버전인지 확인한다.
+1. 설치한 버전이 최신 버전인지 확인한다.
```powershell
kubectl version --client
@@ -256,32 +359,32 @@ macOS에서 [Macports](https://macports.org/) 패키지 관리자를 사용하
{{< /tabs >}}
-2. 설치한 버전이 최신 버전인지 확인한다.
+1. 설치한 버전이 최신 버전인지 확인한다.
```powershell
kubectl version --client
```
-3. 홈 디렉터리로 이동한다.
+1. 홈 디렉터리로 이동한다.
```powershell
# cmd.exe를 사용한다면, 다음을 실행한다. cd %USERPROFILE%
cd ~
```
-4. `.kube` 디렉터리를 생성한다.
+1. `.kube` 디렉터리를 생성한다.
```powershell
mkdir .kube
```
-5. 금방 생성한 `.kube` 디렉터리로 이동한다.
+1. 금방 생성한 `.kube` 디렉터리로 이동한다.
```powershell
cd .kube
```
-6. 원격 쿠버네티스 클러스터를 사용하도록 kubectl을 구성한다.
+1. 원격 쿠버네티스 클러스터를 사용하도록 kubectl을 구성한다.
```powershell
New-Item config -type file
@@ -297,13 +400,13 @@ kubectl을 Google Cloud SDK의 일부로 설치할 수 있다.
1. [Google Cloud SDK](https://cloud.google.com/sdk/)를 설치한다.
-2. `kubectl` 설치 명령을 실행한다.
+1. `kubectl` 설치 명령을 실행한다.
```shell
gcloud components install kubectl
```
-3. 설치한 버전이 최신 버전인지 확인한다.
+1. 설치한 버전이 최신 버전인지 확인한다.
```shell
kubectl version --client
@@ -381,11 +484,13 @@ source /usr/share/bash-completion/bash_completion
```bash
echo 'source <(kubectl completion bash)' >>~/.bashrc
```
+
- 완성 스크립트를 `/etc/bash_completion.d` 디렉터리에 추가한다.
```bash
kubectl completion bash >/etc/bash_completion.d/kubectl
```
+
kubectl에 대한 앨리어스(alias)가 있는 경우, 해당 앨리어스로 작업하도록 셸 완성을 확장할 수 있다.
```bash
@@ -466,7 +571,6 @@ export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
```bash
echo 'source <(kubectl completion bash)' >>~/.bash_profile
-
```
- 완성 스크립트를 `/usr/local/etc/bash_completion.d` 디렉터리에 추가한다.
diff --git a/content/pt/docs/tutorials/_index.md b/content/pt/docs/tutorials/_index.md
index a488f84388248..bc39fd817a79b 100644
--- a/content/pt/docs/tutorials/_index.md
+++ b/content/pt/docs/tutorials/_index.md
@@ -21,7 +21,7 @@ Antes de iniciar um tutorial, é interessante que vocẽ salve a página de [Glo
* [Introdução ao Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#) é um curso gratuito da edX que te guia no entendimento do Kubernetes, seus conceitos, bem como na execução de tarefas mais simples.
-* [Hello Minikube](/docs/tutorials/hello-minikube/) é um "Hello World" que te permite testar rapidamente o Kubernetes em sua estação com o uso do Minikube
+* [Olá, Minikube!](/pt/docs/tutorials/hello-minikube/) é um "Hello World" que te permite testar rapidamente o Kubernetes em sua estação com o uso do Minikube
## Configuração
diff --git a/content/pt/docs/tutorials/hello-minikube.md b/content/pt/docs/tutorials/hello-minikube.md
new file mode 100644
index 0000000000000..0db5d20ddcea5
--- /dev/null
+++ b/content/pt/docs/tutorials/hello-minikube.md
@@ -0,0 +1,258 @@
+---
+title: Olá, Minikube!
+content_type: tutorial
+weight: 5
+menu:
+ main:
+ title: "Iniciar"
+ weight: 10
+ post: >
+ Pronto para meter a mão na massa? Vamos criar um cluster Kubernetes simples e executar uma aplicação exemplo.
+card:
+ name: tutorials
+ weight: 10
+---
+
+
+
+Este tutorial mostra como executar uma aplicação exemplo no Kubernetes utilizando o [Minikube](https://minikube.sigs.k8s.io) e o [Katacoda](https://www.katacoda.com). O Katacoda disponibiliza um ambiente Kubernetes gratuito e acessível via navegador.
+
+{{< note >}}
+Você também consegue seguir os passos desse tutorial instalando o Minikube localmente. Para instruções de instalação, acesse: [iniciando com minikube](https://minikube.sigs.k8s.io/docs/start/).
+{{< /note >}}
+
+## Objetivos
+
+* Instalar uma aplicação exemplo no minikube.
+* Executar a aplicação.
+* Visualizar os logs da aplicação.
+
+## Antes de você iniciar
+
+Este tutorial disponibiliza uma imagem de contêiner que utiliza o NGINX para retornar todas as requisições.
+
+
+
+## Criando um cluster do Minikube
+
+1. Clique no botão abaixo **para iniciar o terminal do Katacoda**.
+
+ {{< kat-button >}}
+
+{{< note >}}
+Se você instalou o Minikube localmente, execute: `minikube start`.
+{{< /note >}}
+
+2. Abra o painel do Kubernetes em um navegador:
+
+ ```shell
+ minikube dashboard
+ ```
+
+3. Apenas no ambiente do Katacoda: Na parte superior do terminal, clique em **Preview Port 30000**.
+
+## Criando um Deployment
+
+Um [*Pod*](/docs/concepts/workloads/pods/) Kubernetes consiste em um ou mais contêineres agrupados para fins de administração e gerenciamento de rede. O Pod desse tutorial possui apenas um contêiner. Um [*Deployment*](/docs/concepts/workloads/controllers/deployment/) Kubernetes verifica a saúde do seu Pod e reinicia o contêiner do Pod caso o mesmo seja finalizado. Deployments são a maneira recomendada de gerenciar a criação e escalonamento dos Pods.
+
+1. Use o comando `kubectl create` para criar um Deployment que gerencia um Pod. O Pod executa um contêiner baseado na imagem Docker disponibilizada.
+
+ ```shell
+ kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
+ ```
+
+2. Visualizando o Deployment:
+
+ ```shell
+ kubectl get deployments
+ ```
+
+ A saída será semelhante a:
+
+ ```
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ hello-node 1/1 1 1 1m
+ ```
+
+3. Visualizando o Pod:
+
+ ```shell
+ kubectl get pods
+ ```
+
+ A saída será semelhante a:
+
+ ```
+ NAME READY STATUS RESTARTS AGE
+ hello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m
+ ```
+
+4. Visualizando os eventos do cluster:
+
+ ```shell
+ kubectl get events
+ ```
+
+5. Visualizando a configuração do `kubectl`:
+
+ ```shell
+ kubectl config view
+ ```
+
+{{< note >}}
+Para mais informações sobre o comando `kubectl`, veja o [kubectl overview](/docs/reference/kubectl/overview/).
+{{< /note >}}
+
+## Criando um serviço
+
+Por padrão, um Pod só é acessível utilizando o seu endereço IP interno no cluster Kubernetes. Para disponibilizar o contêiner `hello-node` fora da rede virtual do Kubernetes, você deve expor o Pod como um [*serviço*](/docs/concepts/services-networking/service/) Kubernetes.
+
+1. Expondo o Pod usando o comando `kubectl expose`:
+
+ ```shell
+ kubectl expose deployment hello-node --type=LoadBalancer --port=8080
+ ```
+
+ O parâmetro `--type=LoadBalancer` indica que você deseja expor o seu serviço fora do cluster Kubernetes.
+
+ A aplicação dentro da imagem `k8s.gcr.io/echoserver` "escuta" apenas na porta TCP 8080. Se você usou
+ `kubectl expose` para expor uma porta diferente, os clientes não conseguirão se conectar a essa outra porta.
+
+2. Visualizando o serviço que você acabou de criar:
+
+ ```shell
+ kubectl get services
+ ```
+
+ A saída será semelhante a:
+
+ ```
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ hello-node   LoadBalancer   10.108.144.78   <pending>   8080:30369/TCP   21s
+ kubernetes   ClusterIP      10.96.0.1       <none>      443/TCP          23m
+ ```
+
+ Em provedores de Cloud que fornecem serviços de balanceamento de carga para o Kubernetes, um IP externo seria provisionado para acessar o serviço. No Minikube, o tipo `LoadBalancer` torna o serviço acessível por meio do comando `minikube service`.
+
+3. Executar o comando a seguir:
+
+ ```shell
+ minikube service hello-node
+ ```
+
+4. (**Apenas no ambiente do Katacoda**) Clicar no sinal de mais e então clicar em **Select port to view on Host 1**.
+
+5. (**Apenas no ambiente do Katacoda**) Observe o número da porta com 5 dígitos exibido ao lado de `8080` na saída do serviço. Este número de porta é gerado aleatoriamente e pode ser diferente para você. Digite seu número na caixa de texto do número da porta e clique em **Display Port**. Usando o exemplo anterior, você digitaria `30369`.
+
+Isso abre uma janela do navegador, acessa o seu aplicativo e mostra o retorno da requisição.
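+
+Como alternativa, você pode obter apenas a URL do serviço e testá-la com `curl`. Um esboço simples (a URL e a porta retornadas podem variar no seu ambiente):
+
+```shell
+# Obtém a URL do serviço sem abrir o navegador
+minikube service hello-node --url
+
+# Testa a aplicação usando a URL retornada acima (valor hipotético)
+curl http://192.168.99.100:30369
+```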
+
+## Habilitando Complementos (addons)
+
+O Minikube inclui um conjunto integrado de {{< glossary_tooltip text="complementos" term_id="addons" >}} que podem ser habilitados, desabilitados e executados no ambiente Kubernetes local.
+
+1. Listando os complementos suportados atualmente:
+
+ ```shell
+ minikube addons list
+ ```
+
+ A saída será semelhante a:
+
+ ```
+ addon-manager: enabled
+ dashboard: enabled
+ default-storageclass: enabled
+ efk: disabled
+ freshpod: disabled
+ gvisor: disabled
+ helm-tiller: disabled
+ ingress: disabled
+ ingress-dns: disabled
+ logviewer: disabled
+ metrics-server: disabled
+ nvidia-driver-installer: disabled
+ nvidia-gpu-device-plugin: disabled
+ registry: disabled
+ registry-creds: disabled
+ storage-provisioner: enabled
+ storage-provisioner-gluster: disabled
+ ```
+
+2. Habilitando um complemento, por exemplo, `metrics-server`:
+
+ ```shell
+ minikube addons enable metrics-server
+ ```
+
+ A saída será semelhante a:
+
+ ```
+ metrics-server was successfully enabled
+ ```
+
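+ Com o complemento `metrics-server` habilitado, você pode consultar métricas de recursos. Um esboço simples (o comando pode falhar enquanto as primeiras métricas ainda não foram coletadas):
+
+ ```shell
+ kubectl top pods
+ ```
+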
+3. Visualizando os Pods e os Serviços que você acabou de criar:
+
+ ```shell
+ kubectl get pod,svc -n kube-system
+ ```
+
+ A saída será semelhante a:
+
+ ```
+ NAME READY STATUS RESTARTS AGE
+ pod/coredns-5644d7b6d9-mh9ll 1/1 Running 0 34m
+ pod/coredns-5644d7b6d9-pqd2t 1/1 Running 0 34m
+ pod/metrics-server-67fb648c5 1/1 Running 0 26s
+ pod/etcd-minikube 1/1 Running 0 34m
+ pod/influxdb-grafana-b29w8 2/2 Running 0 26s
+ pod/kube-addon-manager-minikube 1/1 Running 0 34m
+ pod/kube-apiserver-minikube 1/1 Running 0 34m
+ pod/kube-controller-manager-minikube 1/1 Running 0 34m
+ pod/kube-proxy-rnlps 1/1 Running 0 34m
+ pod/kube-scheduler-minikube 1/1 Running 0 34m
+ pod/storage-provisioner 1/1 Running 0 34m
+
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ service/metrics-server        ClusterIP   10.96.241.45    <none>   80/TCP              26s
+ service/kube-dns              ClusterIP   10.96.0.10      <none>   53/UDP,53/TCP       34m
+ service/monitoring-grafana    NodePort    10.99.24.54     <none>   80:30002/TCP        26s
+ service/monitoring-influxdb   ClusterIP   10.111.169.94   <none>   8083/TCP,8086/TCP   26s
+ ```
+
+4. Desabilitando o complemento `metrics-server`:
+
+ ```shell
+ minikube addons disable metrics-server
+ ```
+
+ A saída será semelhante a:
+
+ ```
+ metrics-server was successfully disabled
+ ```
+
+## Removendo os recursos do Minikube
+
+Agora você pode remover todos os recursos criados no seu cluster:
+
+```shell
+kubectl delete service hello-node
+kubectl delete deployment hello-node
+```
+(**Opcional**) Pare a máquina virtual (VM) do Minikube:
+
+```shell
+minikube stop
+```
+(**Opcional**) Remova a VM do Minikube:
+
+```shell
+minikube delete
+```
+
+## Próximos passos
+
+* Aprender mais sobre [Deployment objects](/docs/concepts/workloads/controllers/deployment/).
+* Aprender mais sobre [Deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/).
+* Aprender mais sobre [Service objects](/docs/concepts/services-networking/service/).
+
diff --git a/content/pt/docs/tutorials/kubernetes-basics/_index.html b/content/pt/docs/tutorials/kubernetes-basics/_index.html
index 90f89ac3daa21..b4a247b3c6c6d 100644
--- a/content/pt/docs/tutorials/kubernetes-basics/_index.html
+++ b/content/pt/docs/tutorials/kubernetes-basics/_index.html
@@ -54,17 +54,17 @@ Módulos básicos do Kubernetes
-
+
-
+
diff --git a/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html b/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html
index 9be46e849db35..5ef10a9920ee8 100644
--- a/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html
+++ b/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html
@@ -25,7 +25,7 @@
diff --git a/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
index fd5025ab45277..971e84ba40493 100644
--- a/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
+++ b/content/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
@@ -100,7 +100,7 @@ Diagrama de Cluster
diff --git a/content/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
index a4f60e374cb60..a5bdb75e8d0eb 100644
--- a/content/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
+++ b/content/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
@@ -93,7 +93,7 @@ Implantar seu primeiro aplicativo no Kubernetes
- Para sua primeira implantação, você usará um aplicativo Node.js empacotado em um contêiner Docker.(Se você ainda não tentou criar um aplicativo Node.js e implantá-lo usando um contêiner, você pode fazer isso primeiro seguindo as instruções do tutorial Hello Minikube ).
+ Para sua primeira implantação, você usará um aplicativo Node.js empacotado em um contêiner Docker. (Se você ainda não tentou criar um aplicativo Node.js e implantá-lo usando um contêiner, você pode fazer isso primeiro seguindo as instruções do tutorial Olá, Minikube!).
Agora que você sabe o que são implantações (Deployment), vamos para o tutorial online e implantar nosso primeiro aplicativo!
@@ -103,7 +103,7 @@
Implantar seu primeiro aplicativo no Kubernetes
diff --git a/content/ru/docs/reference/kubectl/cheatsheet.md b/content/ru/docs/reference/kubectl/cheatsheet.md
index d2be7e9c0c1a0..02a8a9bc4af1f 100644
--- a/content/ru/docs/reference/kubectl/cheatsheet.md
+++ b/content/ru/docs/reference/kubectl/cheatsheet.md
@@ -186,6 +186,9 @@ kubectl get pods --show-labels
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
+# Вывод декодированных секретов без внешних инструментов
+kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'
+
# Вывести все секреты, используемые сейчас в поде.
kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
diff --git a/content/vi/community/static/cncf-code-of-conduct.md b/content/vi/community/static/cncf-code-of-conduct.md
index 9d7008e902c7b..12c5472142bde 100644
--- a/content/vi/community/static/cncf-code-of-conduct.md
+++ b/content/vi/community/static/cncf-code-of-conduct.md
@@ -23,8 +23,8 @@ Quy tắc ứng xử này áp dụng cả trong không gian dự án và trong k
Các trường hợp lạm dụng, quấy rối hoặc hành vi không thể chấp nhận được trong Kubernetes có thể được báo cáo bằng cách liên hệ với [Ủy ban Quy tắc ứng xử Kubernetes](https://git.k8s.io/community/committee-code-of-conduct) thông qua <conduct@kubernetes.io>. Đối với các dự án khác, vui lòng liên hệ với người bảo trì dự án CNCF hoặc hòa giải viên của chúng tôi, Mishi Choudhary <mishi@linux.com>.
-Quy tắc ứng xử này được điều chỉnh từ Giao ước cộng tác viên (http://contributor-covenant.org), phiên bản 1.2.0, có sẵn tại
-http://contributor-covenant.org/version/1/2/0/
+Quy tắc ứng xử này được điều chỉnh từ Giao ước cộng tác viên (https://contributor-covenant.org), phiên bản 1.2.0, có sẵn tại
+https://contributor-covenant.org/version/1/2/0/
### Quy tắc ứng xử sự kiện CNCF
diff --git a/content/zh/blog/_posts/2020-10-01-contributing-to-the-development-guide/index.md b/content/zh/blog/_posts/2020-10-01-contributing-to-the-development-guide/index.md
new file mode 100644
index 0000000000000..01c1e2942faf2
--- /dev/null
+++ b/content/zh/blog/_posts/2020-10-01-contributing-to-the-development-guide/index.md
@@ -0,0 +1,211 @@
+---
+title: "为开发指南做贡献"
+linkTitle: "为开发指南做贡献"
+Author: Erik L. Arneson
+Description: "一位新的贡献者描述了编写和提交对 Kubernetes 开发指南的修改的经验。"
+date: 2020-10-01
+canonicalUrl: https://www.kubernetes.dev/blog/2020/09/28/contributing-to-the-development-guide/
+resources:
+- src: "jorge-castro-code-of-conduct.jpg"
+ title: "Jorge Castro 正在 SIG ContribEx 的周例会上宣布 Kubernetes 的行为准则。"
+---
+
+
+
+
+
+
+当大多数人想到为一个开源项目做贡献时,我猜想他们可能想到的是贡献代码修改、新功能和错误修复。作为一个软件工程师和一个长期的开源用户和贡献者,这也正是我的想法。
+虽然我已经在不同的工作流中写了不少文档,但规模庞大的 Kubernetes 社区是一种新型 "客户"。我只是不知道当 Google 要求我和 [Lion's Way](https://lionswaycontent.com/) 的同胞们对 Kubernetes 开发指南进行必要更新时会发生什么。
+
+*本文最初出现在 [Kubernetes Contributor Community blog](https://www.kubernetes.dev/blog/2020/09/28/contributing-to-the-development-guide/)。*
+
+
+
+## 与社区合作的乐趣
+
+作为专业的写手,我们习惯了受雇于他人去书写非常具体的项目。我们专注于技术服务,产品营销,技术培训以及文档编制,范围从相对宽松的营销邮件到针对 IT 和开发人员的深层技术白皮书。
+在这种专业服务下,每一个可交付的项目往往都有可衡量的投资回报。我知道在从事开源文档工作时不会出现这个指标,但我不确定它将如何改变我与项目的关系。
+
+
+
+我们的写作和传统客户之间的关系有一个主要的特点,就是我们在一个公司里面总是有一两个主要的对接人。他们负责审查我们的文稿,并确保文稿内容符合公司的声明且对标于他们正在寻找的受众。
+这随之而来的压力--正好解释了为什么我很高兴我的写作伙伴、鹰眼审稿人同时也是嗜血编辑的 [Joel](https://twitter.com/JoelByronBarker) 处理了大部分的客户联系。
+
+
+
+
+在与 Kubernetes 社区合作时,所有与客户接触的压力都消失了,这让我感到惊讶和高兴。
+
+
+
+"我必须得多仔细?如果我搞砸了怎么办?如果我让开发商生气了怎么办?如果我树敌了怎么办?"。
+当我第一次加入 Kubernetes Slack 上的 "#sig-contribex " 频道并宣布我将编写 [开发指南](https://github.com/kubernetes/community/blob/master/contributors/devel/development.md) 时,这些问题都在我脑海中奔腾,让我感觉如履薄冰。
+
+
+
+{{< imgproc jorge-castro-code-of-conduct Fit "800x450" >}}
+"Kubernetes 编码准则已经生效,让我们共同勉励。" — Jorge
+Castro, SIG ContribEx co-chair
+{{< /imgproc >}}
+
+
+
+事实上我的担心是多虑的。很快,我就感觉到自己是被欢迎的。我倾向于认为这不仅仅是因为我正在从事一项急需的任务,而是因为 Kubernetes 社区充满了友好、热情的人们。
+在每周的 SIG ContribEx 会议上,我们关于开发指南进展情况的报告会被立即纳入其中。此外,会议的领导会一直强调 [Kubernetes](https://www.kubernetes.dev/resources/code-of-conduct/) 行为准则,我们应该像 Bill 和 Ted 一样,彼此友善。
+
+
+
+
+## 这并不意味着这一切都很简单
+
+开发指南需要一次全面检查。当我们拿到它的时候,它已经捆绑了大量的信息和很多新开发者需要经历的步骤,但随着时间的推移和被忽视,它变得相当陈旧。
+文档的确需要全局观,而不仅仅是点与点的修复。最终,我向[社区仓库](https://github.com/kubernetes/community)提交了一个巨大的 pull 请求:新增 267 行,删除 88 行。
+
+
+
+pull 请求的周期需要一定数量的 Kubernetes 组织成员审查和批准更改后才能合并。这是一个很好的做法,因为它使文档和代码都保持在相当不错的状态,
+但要哄骗合适的人花时间来做这样一个赫赫有名的审查是很难的。
+因此,那次大规模的 PR 从我第一次提交到最后合并,用了 26 天。但最终,[它是成功的](https://github.com/kubernetes/community/pull/5003)。
+
+
+
+由于 Kubernetes 是一个发展相当迅速的项目,而且开发人员通常对编写文档并不十分感兴趣,所以我也遇到了一个问题,那就是有时候,
+描述 Kubernetes 子系统工作原理的秘密珍宝被深埋在 [天才工程师的迷宫式思维](https://github.com/amwat) 中,而不是用单纯的英文写在 Markdown 文件中。
+当我要更新端到端(e2e)测试的入门文档时,就一头撞上了这个问题。
+
+
+
+这段旅程将我带出了编写文档的领域,进入到一些未完成软件的全新用户角色。最终我花了很多心思与新的 [`kubetest2` 框架](https://github.com/kubernetes-sigs/kubetest2) 的开发者之一合作,
+记录了最新 e2e 测试的启动和运行过程。
+你可以通过查看我的 [已完成的 pull request](https://github.com/kubernetes/community/pull/5045) 来自己判断结果。
+
+
+
+## 没有人是老板,每个人都给出反馈。
+
+但当我暗自期待混乱的时候,为 Kubernetes 开发指南做贡献以及与神奇的 Kubernetes 社区互动的过程却非常顺利。
+没有争执,我也没有树敌。每个人都非常友好和热情。这是令人*愉快的*。
+
+
+
+对于一个开源项目,没人是老板。Kubernetes 项目,一个近乎巨大的项目,被分割成许多不同的特殊兴趣小组(SIG)、工作组和社区。
+每个小组都有自己的定期会议、职责分配和主席推选。我的工作与 SIG ContribEx(负责监督并寻求改善贡献者体验)和 SIG Testing(负责测试)的工作有交集。
+事实证明,这两个 SIG 都很容易合作,他们渴望贡献,而且都是非常友好和热情的人。
+
+
+
+在 Kubernetes 这样一个活跃的、有生命力的项目中,文档仍然需要与代码库一起进行维护、修订和测试。
+开发指南将继续对 Kubernetes 代码库的新贡献者起到至关重要的作用,正如我们的努力所显示的那样,该指南必须与 Kubernetes 项目的发展保持同步。
+
+
+
+Joel 和我非常喜欢与 Kubernetes 社区互动并为开发指南做出贡献。我真的很期待,不仅能继续做出更多贡献,还能继续与过去几个月在这个庞大的开源社区中结识的新朋友进行合作。
diff --git a/content/zh/blog/_posts/2020-10-01-contributing-to-the-development-guide/jorge-castro-code-of-conduct.jpg b/content/zh/blog/_posts/2020-10-01-contributing-to-the-development-guide/jorge-castro-code-of-conduct.jpg
new file mode 100644
index 0000000000000..aeea042a7a292
Binary files /dev/null and b/content/zh/blog/_posts/2020-10-01-contributing-to-the-development-guide/jorge-castro-code-of-conduct.jpg differ
diff --git a/content/zh/blog/_posts/2020-12-02-dockershim-faq.md b/content/zh/blog/_posts/2020-12-02-dockershim-faq.md
new file mode 100644
index 0000000000000..b169eea610212
--- /dev/null
+++ b/content/zh/blog/_posts/2020-12-02-dockershim-faq.md
@@ -0,0 +1,316 @@
+---
+layout: blog
+title: "弃用 Dockershim 的常见问题"
+date: 2020-12-02
+slug: dockershim-faq
+aliases: [ '/dockershim' ]
+---
+
+
+
+本文回顾了自 Kubernetes v1.20 版宣布弃用 Dockershim 以来所引发的一些常见问题。
+关于 Kubernetes kubelets 从容器运行时的角度弃用 Docker 的细节以及这些细节背后的含义,请参考博文
+[别慌: Kubernetes 和 Docker](/blog/2020/12/02/dont-panic-kubernetes-and-docker/)
+
+
+### 为什么弃用 dockershim {#why-is-dockershim-being-deprecated}
+
+
+维护 dockershim 已经成为 Kubernetes 维护者肩头一个沉重的负担。
+创建 CRI 标准就是为了减轻这个负担,同时也可以增加不同容器运行时之间平滑的互操作性。
+但反观 Docker 却至今也没有实现 CRI,所以麻烦就来了。
+
+
+Dockershim 向来都是一个临时解决方案(因此得名:shim)。
+你可以进一步阅读
+[移除 Kubernetes 增强方案 Dockershim](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1985-remove-dockershim)
+以了解相关的社区讨论和计划。
+
+
+此外,与 dockershim 不兼容的一些特性,例如:控制组(cgroups)v2 和用户名字空间(user namespace),已经在新的 CRI 运行时中被实现。
+移除对 dockershim 的支持将加速这些领域的发展。
+
+
+### 在 Kubernetes 1.20 版本中,我还可以用 Docker 吗? {#can-I-still-use-docker-in-kubernetes-1.20}
+
+
+当然可以,在 1.20 版本中仅有的改变就是:如果使用 Docker 运行时,启动
+[kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/)
+的过程中将打印一条警告日志。
+
+
+### 什么时候移除 dockershim {#when-will-dockershim-be-removed}
+
+
+考虑到此改变带来的影响,我们使用了一个加长的废弃时间表。
+在 Kubernetes 1.22 版之前,它不会被彻底移除;换句话说,dockershim 最早会在 2021 年底发布的 1.23 版中被移除。
+我们将与供应商以及其他生态团队紧密合作,确保顺利过渡,并将依据事态的发展评估后续事项。
+
+
+### 我现有的 Docker 镜像还能正常工作吗? {#will-my-existing-docker-image-still-work}
+
+
+当然可以,`docker build` 创建的镜像适用于任何 CRI 实现。
+所有你的现有镜像将和往常一样工作。
+
+
+### 私有镜像呢?{#what-about-private-images}
+
+
+当然可以。所有 CRI 运行时均支持 Kubernetes 中相同的拉取(pull)Secret 配置,
+不管是通过 PodSpec 还是通过 ServiceAccount 均可。
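+
+下面是一个创建拉取(pull)Secret 的简单示意,其中仓库地址与凭据均为假设的占位值:
+
+```shell
+kubectl create secret docker-registry regcred \
+  --docker-server=registry.example.com \
+  --docker-username=<用户名> \
+  --docker-password=<密码> \
+  --docker-email=<邮箱>
+```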
+
+
+### Docker 和容器是一回事吗? {#are-docker-and-containers-the-same-thing}
+
+
+虽然 Linux 的容器技术已经存在了很久,
+但 Docker 普及了 Linux 容器这种技术模式,并在开发底层技术方面发挥了重要作用。
+容器的生态相比于单纯的 Docker,已经进化到了一个更宽广的领域。
+像 OCI 和 CRI 这类标准帮助许多工具在我们的生态中成长和繁荣,
+其中一些工具替代了 Docker 的某些部分,另一些增强了现有功能。
+
+
+### 现在是否有在生产系统中使用其他运行时的例子? {#are-there-example-of-folks-using-other-runtimes-in-production-today}
+
+
+Kubernetes 项目出产的所有工件(Kubernetes 二进制文件)在每个发布版本中都经过了验证。
+
+
+此外,[kind](https://kind.sigs.k8s.io/) 项目使用 containerd 已经有年头了,
+并且在这个场景中,稳定性还明显得到提升。
+kind 和 containerd 每天都会被多次用来验证对 Kubernetes 代码库的各项更改。
+其他相关项目也遵循同样的模式,从而展示了其他容器运行时的稳定性和可用性。
+例如,OpenShift 4.x 从 2019 年 6 月以来,就一直在生产环境中使用 [CRI-O](https://cri-o.io/) 运行时。
+
+
+至于其他示例和参考资料,你可以查看 containerd 和 CRI-O 的使用者列表,
+这两个容器运行时是云原生计算基金会([CNCF](https://cncf.io/))下的项目。
+
+- [containerd](https://github.com/containerd/containerd/blob/master/ADOPTERS.md)
+- [CRI-O](https://github.com/cri-o/cri-o/blob/master/ADOPTERS.md)
+
+
+### 人们总在谈论 OCI,那是什么? {#people-keep-referenceing-oci-what-is-that}
+
+
+OCI 代表[开放容器标准](https://opencontainers.org/about/overview/),
+它标准化了容器工具和底层实现(technologies)之间的大量接口。
+他们维护了打包容器镜像(OCI image-spec)和运行容器(OCI runtime-spec)的标准规范。
+他们还以 [runc](https://github.com/opencontainers/runc)
+的形式维护了一个 runtime-spec 的真实实现,
+这也是 [containerd](https://containerd.io/) 和 [CRI-O](https://cri-o.io/) 依赖的默认运行时。
+CRI 建立在这些底层规范之上,为管理容器提供端到端的标准。
+
+
+### 我应该用哪个 CRI 实现? {#which-cri-implementation-should-I-use}
+
+
+这是一个复杂的问题,依赖于许多因素。
+在 Docker 工作良好的情况下,迁移到 containerd 是一个相对容易的转换,并将获得更好的性能和更少的开销。
+然而,我们建议你先探索 [CNCF 全景图](https://landscape.cncf.io/category=container-runtime&format=card-mode&grouping=category)
+提供的所有选项,以做出更适合你的环境的选择。
+
+
+### 当切换 CRI 底层实现时,我应该注意什么? {#what-should-I-look-out-for-when-changing-CRI-implementation}
+
+
+Docker 和大多数 CRI(包括 containerd)的底层容器化代码是相同的,但其周边部分却存在一些不同。
+迁移时一些常见的关注点是:
+
+
+
+- 日志配置
+- 运行时的资源限制
+- 直接访问 docker 命令或通过控制套接字调用 Docker 的节点供应脚本
+- 需要访问 docker 命令或控制套接字的 kubectl 插件
+- 需要直接访问 Docker 的 Kubernetes 工具(例如:kube-imagepuller)
+- 像 `registry-mirrors` 和不安全的注册表这类功能的配置
+- 需要 Docker 保持可用、且运行在 Kubernetes 之外的,其他支持脚本或守护进程(例如:监视或安全代理)
+- GPU 或特殊硬件,以及它们如何与你的运行时和 Kubernetes 集成
+
+
+如果你只是用了 Kubernetes 资源请求/限制或基于文件的日志收集 DaemonSet,它们将继续稳定工作,
+但是如果你用了自定义了 dockerd 配置,则可能需要为新容器运行时做一些适配工作。
+
+
+另外还有一个需要关注的点,那就是当创建镜像时,系统维护或嵌入容器方面的任务将无法工作。
+对于前者,可以用 [`crictl`](https://github.com/kubernetes-sigs/cri-tools) 工具作为临时替代方案
+(参见 [从 docker 命令映射到 crictl](https://kubernetes.io/zh/docs/tasks/debug-application-cluster/crictl/#mapping-from-docker-cli-to-crictl));
+对于后者,可以用新的容器创建选项,比如
+[img](https://github.com/genuinetools/img)、
+[buildah](https://github.com/containers/buildah)、
+[kaniko](https://github.com/GoogleContainerTools/kaniko)、或
+[buildkit-cli-for-kubectl](https://github.com/vmware-tanzu/buildkit-cli-for-kubectl),
+他们均不需要访问 Docker。
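+
+下面给出几个从 docker 命令到前面提到的 `crictl` 工具的大致对应,作为简单示意(假设节点上已安装 `crictl` 并配置了正确的运行时端点):
+
+```shell
+crictl ps             # 大致对应 docker ps,列出容器
+crictl logs <容器ID>   # 大致对应 docker logs,查看容器日志
+```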
+
+
+对于 containerd,你可以从它们的
+[文档](https://github.com/containerd/cri/blob/master/docs/registry.md)
+开始,看看在迁移过程中有哪些配置选项可用。
+
+
+对于如何协同 Kubernetes 使用 containerd 和 CRI-O 的说明,参见 Kubernetes 文档中这部分:
+[容器运行时](/zh/docs/setup/production-environment/container-runtimes)。
+
+
+### 我还有问题怎么办?{#what-if-I-have-more-question}
+
+
+如果你使用了一个有供应商支持的 Kubernetes 发行版,你可以咨询供应商他们产品的升级计划。
+对于最终用户的问题,请把问题发到我们的最终用户社区的论坛:https://discuss.kubernetes.io/。
+
+
+你也可以看看这篇优秀的博文:
+[等等,Docker 刚刚被 Kubernetes 废掉了?](https://dev.to/inductor/wait-docker-is-deprecated-in-kubernetes-now-what-do-i-do-e4m)
+这是一篇对此变化更深入的技术讨论。
+
+
+### 我可以加入吗?{#can-I-have-a-hug}
+
+
+只要你愿意,随时随地欢迎加入!
+
diff --git a/content/zh/docs/concepts/architecture/cloud-controller.md b/content/zh/docs/concepts/architecture/cloud-controller.md
index 47043364b81a9..7d84826583746 100644
--- a/content/zh/docs/concepts/architecture/cloud-controller.md
+++ b/content/zh/docs/concepts/architecture/cloud-controller.md
@@ -22,7 +22,7 @@ components.
使用云基础设施技术,你可以在公有云、私有云或者混合云环境中运行 Kubernetes。
Kubernetes 的信条是基于自动化的、API 驱动的基础设施,同时避免组件间紧密耦合。
-{{< glossary_definition term_id="cloud-controller-manager" length="all" prepend="组件 cloud-controller-manager 是">}}
+{{< glossary_definition term_id="cloud-controller-manager" length="all" prepend="组件 cloud-controller-manager 是指云控制器管理器,">}}
在温度计的例子中,如果房间很冷,那么某个控制器可能还会启动一个防冻加热器。
@@ -198,7 +198,7 @@ Kubernetes 采用了系统的云原生视图,并且可以处理持续的变化
在任务执行时,集群随时都可能被修改,并且控制回路会自动修复故障。
这意味着很可能集群永远不会达到稳定状态。
-只要集群中控制器的在运行并且进行有效的修改,整体状态的稳定与否是无关紧要的。
+只要集群中的控制器在运行并且进行有效的修改,整体状态的稳定与否是无关紧要的。
Kubernetes 通过将容器放入在节点(Node)上运行的 Pod 中来执行你的工作负载。
-节点可以是一个虚拟机或者物理机器,取决于所在的集群配置。每个节点都包含用于运行
-{{< glossary_tooltip text="Pod" term_id="pod" >}} 所需要的服务,这些服务由
-{{< glossary_tooltip text="控制面" term_id="control-plane" >}}负责管理。
+节点可以是一个虚拟机或者物理机器,取决于所在的集群配置。每个节点由
+{{< glossary_tooltip text="控制面" term_id="control-plane" >}} 负责管理,
+并包含运行 {{< glossary_tooltip text="Pods" term_id="pod" >}} 所需的服务。
通常集群中会有若干个节点;而在一个学习用或者资源受限的环境中,你的集群中也可能
只有一个节点。
diff --git a/content/zh/docs/concepts/cluster-administration/logging.md b/content/zh/docs/concepts/cluster-administration/logging.md
index 2a820144be8f4..44689dcac3bfb 100644
--- a/content/zh/docs/concepts/cluster-administration/logging.md
+++ b/content/zh/docs/concepts/cluster-administration/logging.md
@@ -14,44 +14,43 @@ weight: 60
应用日志可以让你了解应用内部的运行状况。日志对调试问题和监控集群活动非常有用。
-大部分现代化应用都有某种日志记录机制;同样地,大多数容器引擎也被设计成支持某种日志记录机制。
-针对容器化应用,最简单且受欢迎的日志记录方式就是写入标准输出和标准错误流。
+大部分现代化应用都有某种日志记录机制。同样地,容器引擎也被设计成支持日志记录。
+针对容器化应用,最简单且最广泛采用的日志记录方式就是写入标准输出和标准错误流。
-但是,由容器引擎或运行时提供的原生功能通常不足以满足完整的日志记录方案。
-例如,如果发生容器崩溃、Pod 被逐出或节点宕机等情况,你仍然想访问到应用日志。
-因此,日志应该具有独立的存储和生命周期,与节点、Pod 或容器的生命周期相独立。
-这个概念叫 _集群级的日志_ 。集群级日志方案需要一个独立的后台来存储、分析和查询日志。
-Kubernetes 没有为日志数据提供原生存储方案,但是你可以集成许多现有的日志解决方案到 Kubernetes 集群中。
+但是,由容器引擎或运行时提供的原生功能通常不足以构成完整的日志记录方案。
+例如,如果发生容器崩溃、Pod 被逐出或节点宕机等情况,你可能想访问应用日志。
+在集群中,日志应该具有独立的存储和生命周期,与节点、Pod 或容器的生命周期相独立。
+这个概念叫 _集群级的日志_ 。
-集群级日志架构假定在集群内部或者外部有一个日志后台。
-如果你对集群级日志不感兴趣,你仍会发现关于如何在节点上存储和处理日志的描述对你是有用的。
+集群级日志架构需要一个独立的后端用来存储、分析和查询日志。
+Kubernetes 并不为日志数据提供原生的存储解决方案。
+相反,有很多现成的日志方案可以集成到 Kubernetes 中。
+下面各节描述如何在节点上处理和存储日志。
## Kubernetes 中的基本日志记录
-本节,你会看到一个kubernetes 中生成基本日志的例子,该例子中数据被写入到标准输出。
-这里的示例为包含一个容器的 Pod 规约,该容器每秒钟向标准输出写入数据。
+这里的示例使用包含一个容器的 Pod 规约,每秒钟向标准输出写入数据。
{{< codenew file="debug/counter-pod.yaml" >}}
@@ -76,7 +75,7 @@ pod/counter created
-使用 `kubectl logs` 命令获取日志:
+像下面这样,使用 `kubectl logs` 命令获取日志:
```shell
kubectl logs counter
@@ -95,10 +94,10 @@ The output is:
```
-一旦发生容器崩溃,你可以使用命令 `kubectl logs` 和参数 `--previous` 检索之前的容器日志。
-如果 pod 中有多个容器,你应该向该命令附加一个容器名以访问对应容器的日志。
+你可以使用命令 `kubectl logs --previous` 检索之前容器实例的日志。
+如果 Pod 中有多个容器,你应该为该命令附加容器名以访问对应容器的日志。
详见 [`kubectl logs` 文档](/docs/reference/generated/kubectl/kubectl-commands#logs)。
容器化应用写入 `stdout` 和 `stderr` 的任何数据,都会被容器引擎捕获并被重定向到某个位置。
例如,Docker 容器引擎将这两个输出流重定向到某个
-[日志驱动](https://docs.docker.com/engine/admin/logging/overview) ,
+[日志驱动(Logging Driver)](https://docs.docker.com/engine/admin/logging/overview) ,
该日志驱动在 Kubernetes 中配置为以 JSON 格式写入文件。
-节点级日志记录中,需要重点考虑实现日志的轮转,以此来保证日志不会消耗节点上所有的可用空间。
-Kubernetes 当前并不负责轮转日志,而是通过部署工具建立一个解决问题的方案。
-例如,在 Kubernetes 集群中,用 `kube-up.sh` 部署一个每小时运行的工具
-[`logrotate`](https://linux.die.net/man/8/logrotate)。
-你也可以设置容器 runtime 来自动地轮转应用日志,比如使用 Docker 的 `log-opt` 选项。
-在 `kube-up.sh` 脚本中,使用后一种方式来处理 GCP 上的 COS 镜像,而使用前一种方式来处理其他环境。
-这两种方式,默认日志超过 10MB 大小时都会触发日志轮转。
+节点级日志记录中,需要重点考虑实现日志的轮转,以此来保证日志不会消耗节点上全部可用空间。
+Kubernetes 并不负责轮转日志,而是通过部署工具建立一个解决问题的方案。
+例如,在用 `kube-up.sh` 部署的 Kubernetes 集群中,有一个每小时运行一次的
+[`logrotate`](https://linux.die.net/man/8/logrotate) 工具。
+你也可以设置容器运行时来自动地轮转应用日志。
例如,你可以找到关于 `kube-up.sh` 为 GCP 环境的 COS 镜像设置日志的详细信息,
-相应的脚本在
-[这里](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)。
+脚本为
+[`configure-helper` 脚本](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)。
当运行 [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) 时,
节点上的 kubelet 处理该请求并直接读取日志文件,同时在响应中返回日志文件内容。
{{< note >}}
-当前,如果有其他系统机制执行日志轮转,那么 `kubectl logs` 仅可查询到最新的日志内容。
-比如,一个 10MB 大小的文件,通过`logrotate` 执行轮转后生成两个文件,一个 10MB 大小,
-一个为空,所以 `kubectl logs` 将返回空。
+如果有外部系统执行日志轮转,那么 `kubectl logs` 仅可查询到最新的日志内容。
+比如,对于一个 10MB 大小的文件,通过 `logrotate` 执行轮转后生成两个文件,
+一个 10MB 大小,一个为空,`kubectl logs` 返回最新的日志文件,而该日志文件
+在这个例子中为空。
{{< /note >}}
* 在容器中运行的 kube-scheduler 和 kube-proxy。
-* 不在容器中运行的 kubelet 和容器运行时(例如 Docker)。
+* 不在容器中运行的 kubelet 和容器运行时。
-在使用 systemd 机制的服务器上,kubelet 和容器 runtime 写入日志到 journald。
-如果没有 systemd,他们写入日志到 `/var/log` 目录的 `.log` 文件。
-容器中的系统组件通常将日志写到 `/var/log` 目录,绕过了默认的日志机制。他们使用
-[klog](https://github.com/kubernetes/klog) 日志库。
-你可以在[日志开发文档](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)找到这些组件的日志告警级别协议。
+在使用 systemd 机制的服务器上,kubelet 和容器运行时将日志写入到 journald 中。
+如果没有 systemd,它们将日志写入到 `/var/log` 目录下的 `.log` 文件中。
+容器中的系统组件通常将日志写到 `/var/log` 目录,绕过了默认的日志机制。
+他们使用 [klog](https://github.com/kubernetes/klog) 日志库。
+你可以在[日志开发文档](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)
+找到这些组件的日志告警级别约定。
和容器日志类似,`/var/log` 目录中的系统组件日志也应该被轮转。
-通过脚本 `kube-up.sh` 启动的 Kubernetes 集群中,日志被工具 `logrotate` 执行每日轮转,
-或者日志大小超过 100MB 时触发轮转。
+通过脚本 `kube-up.sh` 启动的 Kubernetes 集群中,日志被工具 `logrotate`
+执行每日轮转,或者日志大小超过 100MB 时触发轮转。
## 集群级日志架构
-虽然Kubernetes没有为集群级日志记录提供原生的解决方案,但你可以考虑几种常见的方法。以下是一些选项:
+虽然 Kubernetes 没有为集群级日志记录提供原生的解决方案,但你可以考虑几种常见的方法。
+以下是一些选项:
* 使用在每个节点上运行的节点级日志记录代理。
-* 在应用程序的 pod 中,包含专门记录日志的 sidecar 容器。
+* 在应用程序的 Pod 中,包含专门记录日志的边车(Sidecar)容器。
* 将日志直接从应用程序中推送到日志记录后端。
-由于日志记录代理必须在每个节点上运行,它可以用 DaemonSet 副本,Pod 或 本机进程来实现。
-然而,后两种方法被弃用并且非常不别推荐。
+由于日志记录代理必须在每个节点上运行,通常可以用 `DaemonSet` 的形式运行该代理。
+节点级日志在每个节点上仅创建一个代理,不需要对节点上的应用做修改。
-对于 Kubernetes 集群来说,使用节点级的日志代理是最常用和被推荐的方式,
-因为在每个节点上仅创建一个代理,并且不需要对节点上的应用做修改。
-但是,节点级的日志 _仅适用于应用程序的标准输出和标准错误输出_。
+容器向标准输出和标准错误输出写出数据,但在格式上并不统一。
+节点级代理收集这些日志并将其转发,以完成汇总。
-Kubernetes 并不指定日志代理,但是有两个可选的日志代理与 Kubernetes 发行版一起发布。
-[Stackdriver 日志](/zh/docs/tasks/debug-application-cluster/logging-stackdriver/)
-适用于 Google Cloud Platform,和
-[Elasticsearch](/zh/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/)。
-你可以在专门的文档中找到更多的信息和说明。
-两者都使用 [fluentd](https://www.fluentd.org/) 与自定义配置作为节点上的代理。
-
-
-### 使用 sidecar 容器和日志代理
+### 使用 sidecar 容器运行日志代理 {#sidecar-container-with-logging-agent}
-你可以通过以下方式之一使用 sidecar 容器:
+你可以通过以下方式之一使用边车(Sidecar)容器:
-* sidecar 容器将应用程序日志传送到自己的标准输出。
-* sidecar 容器运行一个日志代理,配置该日志代理以便从应用容器收集日志。
+* 边车容器将应用程序日志传送到自己的标准输出。
+* 边车容器运行一个日志代理,配置该日志代理以便从应用容器收集日志。
#### 传输数据流的 sidecar 容器
-
+
-利用 sidecar 容器向自己的 `stdout` 和 `stderr` 传输流的方式,
+利用边车容器向自己的 `stdout` 和 `stderr` 传输流的方式,
你就可以利用每个节点上的 kubelet 和日志代理来处理日志。
-sidecar 容器从文件、套接字或 journald 读取日志。
-每个 sidecar 容器打印其自己的 `stdout` 和 `stderr` 流。
+边车容器从文件、套接字或 journald 读取日志。
+每个边车容器向自己的 `stdout` 和 `stderr` 流中输出日志。
-考虑接下来的例子。pod 的容器向两个文件写不同格式的日志,下面是这个 pod 的配置文件:
+例如,某 Pod 中运行一个容器,该容器向两个文件写不同格式的日志。
+下面是这个 pod 的配置文件:
{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}}
-在同一个日志流中有两种不同格式的日志条目,这有点混乱,即使你试图重定向它们到容器的 `stdout` 流。
-取而代之的是,你可以引入两个 sidecar 容器。
-每一个 sidecar 容器可以从共享卷跟踪特定的日志文件,并重定向文件内容到各自的 `stdout` 流。
+不建议在同一个日志流中写入不同格式的日志条目,即使你成功地将其重定向到容器的
+`stdout` 流。相反,你可以创建两个边车容器。每个边车容器可以从共享卷
+跟踪特定的日志文件,并将文件内容重定向到各自的 `stdout` 流。
-这是运行两个 sidecar 容器的 Pod 文件。
+下面是运行两个边车容器的 Pod 的配置文件:
{{< codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" >}}
@@ -358,12 +350,18 @@ Here's a configuration file for a pod that has two sidecar containers:
Now when you run this pod, you can access each log stream separately by
running the following commands:
-->
-现在当你运行这个 Pod 时,你可以分别地访问每一个日志流,运行如下命令:
+现在当你运行这个 Pod 时,你可以运行如下命令分别访问每个日志流:
```shell
kubectl logs counter count-log-1
```
-```
+
+
+输出为:
+
+```console
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
@@ -373,7 +371,13 @@ kubectl logs counter count-log-1
```shell
kubectl logs counter count-log-2
```
-```
+
+
+输出为:
+
+```console
Mon Jan 1 00:00:00 UTC 2001 INFO 0
Mon Jan 1 00:00:01 UTC 2001 INFO 1
Mon Jan 1 00:00:02 UTC 2001 INFO 2
@@ -385,7 +389,8 @@ The node-level agent installed in your cluster picks up those log streams
automatically without any further configuration. If you like, you can configure
the agent to parse log lines depending on the source container.
-->
-集群中安装的节点级代理会自动获取这些日志流,而无需进一步配置。如果你愿意,你可以配置代理程序来解析源容器的日志行。
+集群中安装的节点级代理会自动获取这些日志流,而无需进一步配置。
+如果你愿意,你也可以配置代理程序来解析源容器的日志行。
-注意,尽管 CPU 和内存使用率都很低(以多个 cpu millicores 指标排序或者按内存的兆字节排序),
+注意,尽管 CPU 和内存使用率都很低(CPU 为几个毫核,内存为几兆字节),
向文件写日志然后输出到 `stdout` 流仍然会成倍地增加磁盘使用率。
-如果你的应用向单一文件写日志,通常最好设置 `/dev/stdout` 作为目标路径,而不是使用流式的 sidecar 容器方式。
+如果你的应用向单一文件写日志,通常最好设置 `/dev/stdout` 作为目标路径,
+而不是使用流式的边车容器方式。
-应用本身如果不具备轮转日志文件的功能,可以通过 sidecar 容器实现。
-该方式的一个例子是运行一个定期轮转日志的容器。
-然而,还是推荐直接使用 `stdout` 和 `stderr`,将日志的轮转和保留策略交给 kubelet。
+应用本身如果不具备轮转日志文件的功能,可以通过边车容器实现。
+该方式的一个例子是运行一个小的、定期轮转日志的容器。
+然而,还是推荐直接使用 `stdout` 和 `stderr`,将日志的轮转和保留策略
+交给 kubelet。
-### 具有日志代理功能的 sidecar 容器
+### 具有日志代理功能的边车容器
-
+
-如果节点级日志记录代理程序对于你的场景来说不够灵活,你可以创建一个带有单独日志记录代理程序的
-sidecar 容器,将代理程序专门配置为与你的应用程序一起运行。
+如果节点级日志记录代理程序对于你的场景来说不够灵活,你可以创建一个
+带有单独日志记录代理的边车容器,将代理程序专门配置为与你的应用程序一起运行。
-
-{{< note >}}
-在 sidecar 容器中使用日志代理会导致严重的资源损耗。
+在边车容器中使用日志代理会带来严重的资源损耗。
此外,你不能使用 `kubectl logs` 命令访问日志,因为日志并没有被 kubelet 管理。
{{< /note >}}
-例如,你可以使用 [Stackdriver](/zh/docs/tasks/debug-application-cluster/logging-stackdriver/),
-它使用 fluentd 作为日志记录代理。
-以下是两个可用于实现此方法的配置文件。
-第一个文件包含配置 fluentd 的
+下面是两个配置文件,可以用来实现一个带日志代理的边车容器。
+第一个文件包含用来配置 fluentd 的
[ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。
{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}
-
-{{< note >}}
-配置 fluentd 超出了本文的范围。要进一步了解如何配置 fluentd,
-请参考 [fluentd 官方文档](https://docs.fluentd.org/).
+要进一步了解如何配置 fluentd,请参考 [fluentd 官方文档](https://docs.fluentd.org/)。
{{< /note >}}
-第二个文件描述了运行 fluentd sidecar 容器的 Pod 。flutend 通过 Pod 的挂载卷获取它的配置数据。
+第二个文件描述了运行 fluentd 边车容器的 Pod。
+fluentd 通过 Pod 的挂载卷获取它的配置数据。
{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}}
-一段时间后,你可以在 Stackdriver 界面看到日志消息。
-
-
-记住,这只是一个例子,事实上你可以用任何一个日志代理替换 fluentd ,并从应用容器中读取任何资源。
+在示例配置中,你可以将 fluentd 替换为任何日志代理,从应用容器内
+的任何来源读取数据。
-
### 从应用中直接暴露日志目录

-通过暴露或推送每个应用的日志,你可以实现集群级日志记录;
-然而,这种日志记录机制的实现已超出 Kubernetes 的范围。
-
+从各个应用中直接暴露和推送日志数据的集群日志机制
+已超出 Kubernetes 的范围。
diff --git a/content/zh/docs/concepts/overview/what-is-kubernetes.md b/content/zh/docs/concepts/overview/what-is-kubernetes.md
index 3fa58f5da09af..b976570edb887 100644
--- a/content/zh/docs/concepts/overview/what-is-kubernetes.md
+++ b/content/zh/docs/concepts/overview/what-is-kubernetes.md
@@ -61,11 +61,11 @@ Early on, organizations ran applications on physical servers. There was no way t
-->
**传统部署时代:**
-早期,组织在物理服务器上运行应用程序。无法为物理服务器中的应用程序定义资源边界,这会导致资源分配问题。
+早期,各个组织机构在物理服务器上运行应用程序。无法为物理服务器中的应用程序定义资源边界,这会导致资源分配问题。
例如,如果在物理服务器上运行多个应用程序,则可能会出现一个应用程序占用大部分资源的情况,
结果可能导致其他应用程序的性能下降。
一种解决方案是在不同的物理服务器上运行每个应用程序,但是由于资源利用不足而无法扩展,
-并且组织维护许多物理服务器的成本很高。
+并且维护许多物理服务器的成本很高。
### LIST 和 WATCH 过滤
-LIST and WATCH 操作可以使用查询参数指定标签选择算符过滤一组对象。
+LIST 和 WATCH 操作可以使用查询参数指定标签选择算符过滤一组对象。
两种需求都是允许的。(这里显示的是它们出现在 URL 查询字符串中)
如果你的集群启用了 IPv4/IPv6 双协议栈网络,则可以使用 IPv4 或 IPv6 地址来创建
{{< glossary_tooltip text="Service" term_id="service" >}}。
-
服务的地址族默认为第一个服务集群 IP 范围的地址族(通过 kube-apiserver 的 `--service-cluster-ip-range` 参数配置)。
-
当你定义服务时,可以选择将其配置为双栈。若要指定所需的行为,你可以设置 `.spec.ipFamilyPolicy` 字段为以下值之一:
2. 在集群上启用双栈时,带有选择算符的现有
[无头服务](/zh/docs/concepts/services-networking/service/#headless-services)
由控制面设置 `.spec.ipFamilyPolicy` 为 `SingleStack`
- 并设置 `.spec.ipFamilies` 为第一个服务群集 IP 范围的地址族(通过配置 kube-controller-manager 的
+ 并设置 `.spec.ipFamilies` 为第一个服务群集 IP 范围的地址族(通过配置 kube-apiserver 的
`--service-cluster-ip-range` 参数),即使 `.spec.ClusterIP` 的设置值为 `None` 也如此。
{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
diff --git a/content/zh/docs/concepts/services-networking/ingress.md b/content/zh/docs/concepts/services-networking/ingress.md
index 97304e33cccfa..90bd23d2d7a95 100644
--- a/content/zh/docs/concepts/services-networking/ingress.md
+++ b/content/zh/docs/concepts/services-networking/ingress.md
@@ -705,7 +705,7 @@ sure the TLS secret you created came from a certificate that contains a Common
Name (CN), also known as a Fully Qualified Domain Name (FQDN) for `https-example.foo.com`.
-->
在 Ingress 中引用此 Secret 将会告诉 Ingress 控制器使用 TLS 加密从客户端到负载均衡器的通道。
-你需要确保创建的 TLS Secret 创建自包含 `sslexample.foo.com` 的公用名称(CN)的证书。
+你需要确保创建的 TLS Secret 创建自包含 `https-example.foo.com` 的公用名称(CN)的证书。
这里的公共名称也被称为全限定域名(FQDN)。
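
下面是检查证书主题(CN)的一个简单示意(假设证书文件名为 `tls.crt`):

```shell
openssl x509 -in tls.crt -noout -subject
```
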
{{< note >}}
diff --git a/content/zh/docs/concepts/workloads/controllers/deployment.md b/content/zh/docs/concepts/workloads/controllers/deployment.md
index 1a39296d58935..14214e706617e 100644
--- a/content/zh/docs/concepts/workloads/controllers/deployment.md
+++ b/content/zh/docs/concepts/workloads/controllers/deployment.md
@@ -186,7 +186,7 @@ Follow the steps given below to create the above Deployment:
-->
* `NAME` 列出了集群中 Deployment 的名称。
* `READY` 显示应用程序的可用的 _副本_ 数。显示的模式是“就绪个数/期望个数”。
- * `UP-TO-DATE` 显示为了打到期望状态已经更新的副本数。
+ * `UP-TO-DATE` 显示为了达到期望状态已经更新的副本数。
* `AVAILABLE` 显示应用可供用户使用的副本数。
* `AGE` 显示应用程序运行的时间。
diff --git a/content/zh/docs/concepts/workloads/controllers/statefulset.md b/content/zh/docs/concepts/workloads/controllers/statefulset.md
index cdc03833162ef..4cd6606a38184 100644
--- a/content/zh/docs/concepts/workloads/controllers/statefulset.md
+++ b/content/zh/docs/concepts/workloads/controllers/statefulset.md
@@ -1,7 +1,7 @@
---
title: StatefulSets
content_type: concept
-weight: 40
+weight: 30
---
+上述例子中:
+
* 名为 `nginx` 的 Headless Service 用来控制网络域名。
* 名为 `web` 的 StatefulSet 有一个 Spec,它表明将在独立的 3 个 Pod 副本中启动 nginx 容器。
* `volumeClaimTemplates` 将通过 PersistentVolumes 驱动提供的
[PersistentVolumes](/zh/docs/concepts/storage/persistent-volumes/) 来提供稳定的存储。
+StatefulSet 的命名需要遵循[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)规范。
+
@@ -217,9 +227,48 @@ StatefulSet 可以使用 [无头服务](/zh/docs/concepts/services-networking/se
一旦每个 Pod 创建成功,就会得到一个匹配的 DNS 子域,格式为:
`$(pod 名称).$(所属服务的 DNS 域名)`,其中所属服务由 StatefulSet 的 `serviceName` 域来设定。
+
+取决于集群域内部 DNS 的配置,有可能无法查询一个刚刚启动的 Pod 的 DNS 命名。
+当集群内其他客户端在 Pod 创建完成前发出 Pod 主机名查询时,就会发生这种情况。
+负缓存(在 DNS 中较为常见)意味着之前失败的查询结果会被记录和重用至少若干秒钟,
+即使 Pod 已经正常运行了也是如此。
+
+如果需要在 Pod 被创建之后及时发现它们,有以下选项:
+
+- 直接查询 Kubernetes API(比如,利用 watch 机制,参见此列表之后的示意)而不是依赖于 DNS 查询
+- 缩短 Kubernetes DNS 驱动的缓存时长(通常这意味着修改 CoreDNS 的 ConfigMap,目前缓存时长为 30 秒)
+
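+下面是利用 watch 机制及时发现新 Pod 的一个简单示意(假设这些 Pod 带有 `app=nginx` 标签):
+
+```shell
+kubectl get pods -l app=nginx --watch
+```
+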
+正如[限制](#limitations)中所述,你需要负责创建[无头服务](/zh/docs/concepts/services-networking/service/#headless-services)
+以便为 Pod 提供网络标识。
+
下面给出一些选择集群域、服务名、StatefulSet 名、及其怎样影响 StatefulSet 的 Pod 上的 DNS 名称的示例:
@@ -350,12 +399,14 @@ described [above](#deployment-and-scaling-guarantees).
`Parallel` pod management tells the StatefulSet controller to launch or
terminate all Pods in parallel, and to not wait for Pods to become Running
and Ready or completely terminated prior to launching or terminating another
-Pod.
+Pod. This option only affects the behavior for scaling operations. Updates are not affected.
+
-->
#### 并行 Pod 管理 {#parallel-pod-management}
`Parallel` Pod 管理让 StatefulSet 控制器并行的启动或终止所有的 Pod,
启动或者终止其他 Pod 前,无需等待 Pod 进入 Running 和 ready 或者完全停止状态。
+这个选项只会影响伸缩操作的行为,更新则不会被影响。
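+
+例如,下面的伸缩操作在 `Parallel` 策略下会并行地创建新副本,这里假设 StatefulSet 名为 `web`:
+
+```shell
+kubectl scale statefulset/web --replicas=5
+```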
你可以使用 `kubectl create --dry-run=client -o yaml` 命令生成以下资源(列表之后给出一个示例):
-```
- clusterrole 创建 ClusterRole。
- clusterrolebinding 为特定的 ClusterRole 创建 ClusterRoleBinding。
- configmap 使用本地文件、目录或文本值创建 Configmap。
- cronjob 使用指定的名称创建 Cronjob。
- deployment 使用指定的名称创建 Deployment。
- job 使用指定的名称创建 Job。
- namespace 使用指定的名称创建名称空间。
- poddisruptionbudget 使用指定名称创建 Pod 干扰预算。
- priorityclass 使用指定的名称创建 Priorityclass。
- quota 使用指定的名称创建配额。
- role 使用单一规则创建角色。
- rolebinding 为特定角色或 ClusterRole 创建 RoleBinding。
- secret 使用指定的子命令创建 Secret。
- service 使用指定的子命令创建服务。
- serviceaccount 使用指定的名称创建服务帐户。
-```
+
+* `clusterrole`: 创建 ClusterRole。
+* `clusterrolebinding`: 为特定的 ClusterRole 创建 ClusterRoleBinding。
+* `configmap`: 使用本地文件、目录或文本值创建 Configmap。
+* `cronjob`: 使用指定的名称创建 Cronjob。
+* `deployment`: 使用指定的名称创建 Deployment。
+* `job`: 使用指定的名称创建 Job。
+* `namespace`: 使用指定的名称创建名称空间。
+* `poddisruptionbudget`: 使用指定名称创建 Pod 干扰预算。
+* `priorityclass`: 使用指定的名称创建 Priorityclass。
+* `quota`: 使用指定的名称创建配额。
+* `role`: 使用单一规则创建角色。
+* `rolebinding`: 为特定角色或 ClusterRole 创建 RoleBinding。
+* `secret`: 使用指定的子命令创建 Secret。
+* `service`: 使用指定的子命令创建服务。
+* `serviceaccount`: 使用指定的名称创建服务帐户。
+
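+例如,下面的命令生成一个 Deployment 的清单而不真正创建它(简单示意):
+
+```shell
+kubectl create deployment nginx --image=nginx --dry-run=client -o yaml
+```
+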
### `kubectl apply`
diff --git a/content/zh/docs/reference/kubectl/docker-cli-to-kubectl.md b/content/zh/docs/reference/kubectl/docker-cli-to-kubectl.md
index a89f43c32c347..b53e6fcd77a19 100644
--- a/content/zh/docs/reference/kubectl/docker-cli-to-kubectl.md
+++ b/content/zh/docs/reference/kubectl/docker-cli-to-kubectl.md
@@ -62,7 +62,7 @@ kubectl create deployment --image=nginx nginx-app
deployment.apps/nginx-app created
```
-```
+```shell
# add env to nginx-app
kubectl set env deployment/nginx-app DOMAIN=cluster
```
diff --git a/content/zh/docs/tasks/access-application-cluster/connecting-frontend-backend.md b/content/zh/docs/tasks/access-application-cluster/connecting-frontend-backend.md
index a485d9ba3a341..42ebe8d250a7c 100644
--- a/content/zh/docs/tasks/access-application-cluster/connecting-frontend-backend.md
+++ b/content/zh/docs/tasks/access-application-cluster/connecting-frontend-backend.md
@@ -345,5 +345,5 @@ kubectl delete deployment frontend backend
-->
* 进一步了解 [Service](/zh/docs/concepts/services-networking/service/)
* 进一步了解 [ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)
-* 进一步了解 [Service 和 Pods 的 DNS](/docs/concepts/services-networking/dns-pod-service/)
+* 进一步了解 [Service 和 Pods 的 DNS](/zh/docs/concepts/services-networking/dns-pod-service/)
diff --git a/content/zh/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md b/content/zh/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md
new file mode 100644
index 0000000000000..e3d8d89d9463b
--- /dev/null
+++ b/content/zh/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you.md
@@ -0,0 +1,164 @@
+---
+title: 检查弃用 Dockershim 对你的影响
+content_type: task
+weight: 20
+---
+
+
+
+
+
+Kubernetes 的 `dockershim` 组件使得你可以把 Docker 用作 Kubernetes 的
+{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}。
+在 Kubernetes v1.20 版本中,内建组件 `dockershim` 被弃用。
+
+
+本页讲解你的集群把 Docker 用作容器运行时的运作机制,
+并提供使用 `dockershim` 时,它所扮演角色的详细信息,
+继而展示了一组验证步骤,可用来检查弃用 `dockershim` 对你的工作负载的影响。
+
+
+## 检查你的应用是否依赖于 Docker {#find-docker-dependencies}
+
+
+虽然你通过 Docker 创建了应用容器,但这些容器却可以运行于所有容器运行时。
+所以这种使用 Docker 容器运行时的方式并不构成对 Docker 的依赖。
+
+
+当用了替代的容器运行时之后,Docker 命令可能不工作,甚至产生意外的输出。
+这才是判定你是否依赖于 Docker 的方法。
+
+
+1. 确认没有特权 Pod 执行 docker 命令。
+2. 检查 Kubernetes 基础架构外部节点上的脚本和应用,确认它们没有执行 Docker 命令。可能的命令有:
+ - SSH 到节点排查故障;
+ - 节点启动脚本;
+ - 直接安装在节点上的监视和安全代理。
+3. 检查执行了上述特权操作的第三方工具。详细操作请参考:
+ [从 dockershim 迁移遥测和安全代理](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/)
+4. 确认没有对 dockershim 行为的间接依赖。这是一种极端情况,不太可能影响你的应用。
+ 一些工具很可能被配置为使用了 Docker 特性,比如,基于特定指标发警报,或者在故障排查指令的一个环节中搜索特定的日志信息。
+ 如果你有此类配置的工具,需要在迁移之前,在测试集群上完成功能验证。
+
+
+
+## Docker 依赖详解 {#role-of-dockershim}
+
+
+[容器运行时](/zh/docs/concepts/containers/#container-runtimes)是一个软件,用来运行组成 Kubernetes Pod 的容器。
+Kubernetes 负责编排和调度 Pod;在每一个节点上,
+{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
+使用抽象的容器运行时接口,所以你可以任意选用兼容的容器运行时。
+
+
+在早期版本中,Kubernetes 提供的兼容性只支持一个容器运行时:Docker。
+在 Kubernetes 发展历史中,集群运营人员希望采用更多的容器运行时。
+于是 CRI 被设计出来满足这类灵活性需要 - 而 kubelet 亦开始支持 CRI。
+然而,因为 Docker 在 CRI 规范创建之前就已经存在,Kubernetes 就创建了一个适配器组件:`dockershim`。
+dockershim 适配器允许 kubelet 与 Docker 交互,就好像 Docker 是一个 CRI 兼容的运行时一样。
+
+
+你可以阅读博文
+[Kubernetes containerd 集成功能正式发布(GA)](/zh/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/)
+
+
+
+
+
+切换到容器运行时 Containerd 可以消除掉中间环节。
+所有以前遗留的容器可由 Containerd 这类容器运行时来运行和管理,操作体验也和以前一样。
+但是现在,由于直接用容器运行时调度容器,所以它们对 Docker 来说是不可见的。
+因此,你以前用来检查这些容器的 Docker 工具或漂亮的 UI 都不再可用。
+
+
+你不能再使用 `docker ps` 或 `docker inspect` 命令来获取容器信息。
+由于你不能列出容器,因此你不能获取日志、停止容器,甚至不能通过 `docker exec` 在容器中执行命令。
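+
+此时可以改用兼容 CRI 的 `crictl` 工具完成类似操作,下面是一个简单示意(假设节点上已安装 `crictl`):
+
+```shell
+crictl ps                      # 类似 docker ps
+crictl inspect <容器ID>        # 类似 docker inspect
+crictl exec -it <容器ID> sh    # 类似 docker exec
+```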
+
+
+{{< note >}}
+
+如果你用 Kubernetes 运行工作负载,最好通过 Kubernetes API 停止容器,而不是通过容器运行时
+(此建议适用于所有容器运行时,不仅仅是针对 Docker)。
+
+{{< /note >}}
+
+
+你仍然可以下载镜像,或者用 `docker build` 命令创建它们。
+但用 Docker 创建、下载的镜像,对于容器运行时和 Kubernetes,均不可见。
+为了在 Kubernetes 中使用,需要把镜像推送(push)到某注册中心。
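+
+下面是构建镜像并推送到注册中心的一个简单示意,其中 `registry.example.com/myapp:v1` 为假设的占位值:
+
+```shell
+docker build -t registry.example.com/myapp:v1 .
+docker push registry.example.com/myapp:v1
+```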
diff --git a/content/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md b/content/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md
new file mode 100644
index 0000000000000..b22c7d4b67eff
--- /dev/null
+++ b/content/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents.md
@@ -0,0 +1,157 @@
+---
+title: 从 dockershim 迁移遥测和安全代理
+content_type: task
+weight: 70
+---
+
+
+
+
+
+在 Kubernetes 1.20 版本中,dockershim 被弃用。
+在博文[弃用 Dockershim 常见问题](/zh/blog/2020/12/02/dockershim-faq/)中,
+你大概已经了解到,大多数应用并没有直接通过运行时来托管容器。
+但是,仍然有大量的遥测和安全代理依赖 docker 来收集容器元数据、日志和指标。
+本文汇总了一些信息和链接:信息用于阐述如何探查这些依赖,链接用于解释如何迁移这些代理去使用通用的工具或其他容器运行时。
+
+
+## 遥测和安全代理 {#telemetry-and-security-agents}
+
+
+为了让代理运行在 Kubernetes 集群中,我们有几种办法。
+代理既可以直接在节点上运行,也可以作为 DaemonSet 运行。
+
+
+### 为什么遥测代理依赖于 Docker? {#why-do-telemetry-agents-relyon-docker}
+
+
+因为历史原因,Kubernetes 建立在 Docker 之上。
+Kubernetes 管理网络和调度,Docker 则在具体的节点上定位并操作容器。
+所以,你可以从 Kubernetes 取得调度相关的元数据,比如 Pod 名称;从 Docker 取得容器状态信息。
+后来,人们开发了更多的运行时来管理容器。
+同时一些项目和 Kubernetes 特性也不断涌现,支持跨多个运行时收集容器状态信息。
+
+
+一些代理和 Docker 工具紧密绑定。此类代理可以这样运行命令,比如用
+[`docker ps`](https://docs.docker.com/engine/reference/commandline/ps/)
+或 [`docker top`](https://docs.docker.com/engine/reference/commandline/top/)
+这类命令来列出容器和进程,用
+[docker logs](https://docs.docker.com/engine/reference/commandline/logs/)
+订阅 Docker 的日志。
+但随着 Docker 作为容器运行时被弃用,这些命令将不再工作。
+
+
+### 识别依赖于 Docker 的 DaemonSet {#identify-docker-dependency}
+
+
+如果某 Pod 想调用运行在节点上的 `dockerd`,该 Pod 必须满足以下两个条件之一:
+
+- 将包含 Docker 守护进程特权套接字的文件系统挂载为一个{{< glossary_tooltip text="卷" term_id="volume" >}};或
+- 直接以卷的形式挂载 Docker 守护进程特权套接字的特定路径。
+
+
+举例来说:在 COS 镜像中,Docker 通过 `/var/run/docker.sock` 开放其 Unix 域套接字。
+这意味着 Pod 的规约中需要包含 `hostPath` 卷以挂载 `/var/run/docker.sock`。
+
+
+下面是一个 shell 示例脚本,用于查找包含直接映射 Docker 套接字的挂载点的 Pod。
+你也可以删掉 `grep '/var/run/docker.sock'` 这一代码片段,以查看其它挂载信息。
+
+```bash
+kubectl get pods --all-namespaces \
+-o=jsonpath='{range .items[*]}{"\n"}{.metadata.namespace}{":\t"}{.metadata.name}{":\t"}{range .spec.volumes[*]}{.hostPath.path}{", "}{end}{end}' \
+| sort \
+| grep '/var/run/docker.sock'
+```
+
+
+{{< note >}}
+对于 Pod 来说,访问宿主机上的 Docker 还有其他方式。
+例如,可以挂载 `/var/run` 的父目录而非其完整路径
+(就像[这个例子](https://gist.github.com/itaysk/7bc3e56d69c4d72a549286d98fd557dd))。
+上述脚本只检测最常见的使用方式。
+{{< /note >}}
+
+
+### 检测节点代理对 Docker 的依赖性 {#detecting-docker-dependency-from-node-agents}
+
+
+在你的集群节点被定制、且在各个节点上均安装了额外的安全和遥测代理的场景下,
+一定要和代理的供应商确认:该代理是否依赖于 Docker。
+
+
+### 遥测和安全代理的供应商 {#telemetry-and-security-agent-vendors}
+
+
+我们通过
+[谷歌文档](https://docs.google.com/document/d/1ZFi4uKit63ga5sxEiZblfb-c23lFhvy6RXVPikS8wf0/edit#)
+提供了为各类遥测和安全代理供应商准备的持续更新的迁移指导。
+请与供应商联系,获取从 dockershim 迁移的最新说明。
diff --git a/content/zh/docs/tasks/administer-cluster/safely-drain-node.md b/content/zh/docs/tasks/administer-cluster/safely-drain-node.md
index a6f2a934423f8..4528aa1550005 100644
--- a/content/zh/docs/tasks/administer-cluster/safely-drain-node.md
+++ b/content/zh/docs/tasks/administer-cluster/safely-drain-node.md
@@ -4,7 +4,6 @@ content_type: task
min-kubernetes-server-version: 1.5
---
@@ -264,7 +262,14 @@ eviction API will never return anything other than 429 or 500.
For example: this can happen if ReplicaSet is creating Pods for your application but
the replacement Pods do not become `Ready`. You can also see similar symptoms if the
last Pod evicted has a very long termination grace period.
+-->
+## 驱逐阻塞
+
+在某些情况下,应用程序可能会进入某种中断状态,此时除非你进行干预,
+驱逐 API 将永远只会返回 429 或 500。
+例如 ReplicaSet 创建的替换 Pod 没有变成就绪状态,或者被驱逐的最后一个
+Pod 有很长的终止宽限期,就会发生这种情况。
+
+
-## 驱逐阻塞
-
-在某些情况下,应用程序可能会到达一个中断状态,除了 429 或 500 之外,它将永远不会返回任何内容。
-例如 ReplicaSet 创建的替换 Pod 没有变成就绪状态,或者被驱逐的最后一个
-Pod 有很长的终止宽限期,就会发生这种情况。
-
在这种情况下,有两种可能的解决方案:
- 中止或暂停自动操作。调查应用程序卡住的原因,并重新启动自动化。
-- 经过适当的长时间等待后, 从集群中删除 Pod 而不是使用驱逐 API。
+- 经过适当的长时间等待后,从集群中删除 Pod 而不是使用驱逐 API。
Kubernetes 并没有具体说明在这种情况下应该采取什么行为,
这应该由应用程序所有者和集群所有者紧密沟通,并达成对行动一致意见。
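
下面是绕过驱逐 API、直接删除 Pod 的一个简单示意,其中名字空间与 Pod 名均为假设值,应在与各方确认后再执行:

```shell
kubectl delete pod -n my-namespace my-pod
```
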
## {{% heading "whatsnext" %}}
-
+-->
* 执行[配置 PDB](/zh/docs/tasks/run-application/configure-pdb/)中的各个步骤,
保护你的应用
-* 进一步了解[节点维护](/zh/docs/tasks/administer-cluster/cluster-management/#maintenance-on-a-node)。
diff --git a/content/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
index 9c3e1cf6fd1ea..a55821b51fa5c 100644
--- a/content/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
+++ b/content/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
@@ -512,23 +512,71 @@ Defaults to 3. Minimum value is 1.
就绪探测情况下的放弃 Pod 会被打上未就绪的标签。默认值是 3。最小值是 1。
+在 Kubernetes 1.20 版本之前,exec 探针会忽略 `timeoutSeconds`:探针会无限期地
+持续运行,甚至可能超过所配置的限期,直到返回结果为止。
+
+
+这一缺陷在 Kubernetes v1.20 版本中得到修复。你可能一直依赖于之前错误的探测行为,
+甚至你都没有觉察到这一问题的存在,因为默认的超时值是 1 秒钟。
+作为集群管理员,你可以在所有的 kubelet 上禁用 `ExecProbeTimeout`
+[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)
+(将其设置为 `false`),从而恢复之前版本中的运行行为,之后当集群中所有的
+exec 探针都设置了 `timeoutSeconds` 参数后,再移除这一重载设置。
+如果你有 Pods 受到此默认 1 秒钟超时值的影响,你应该更新 Pod 对应的探针的
+超时值,这样才能为最终去除该特性门控做好准备。
+
+
+当此缺陷被修复之后,在使用 `dockershim` 容器运行时的 Kubernetes `1.20+`
+版本中,对于 exec 探针而言,容器中的进程可能会因为超时值的设置保持持续运行,
+即使探针返回了失败状态。
+
+{{< caution >}}
+
+如果就绪态探针的实现不正确,可能会导致容器中进程的数量不断上升。
+如果不对其采取措施,很可能导致资源枯竭的状况。
+{{< /caution >}}
+
+
+### HTTP 探测 {#http-probes}
+
[HTTP Probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core)
可以在 `httpGet` 上配置额外的字段:
* `host`:连接使用的主机名,默认是 Pod 的 IP。也可以在 HTTP 头中设置 “Host” 来代替。
* `scheme` :用于设置连接主机的方式(HTTP 还是 HTTPS)。默认是 HTTP。
-* `path`:访问 HTTP 服务的路径。
+* `path`:访问 HTTP 服务的路径。默认值为 "/"。
* `httpHeaders`:请求中自定义的 HTTP 头。HTTP 头字段允许重复。
* `port`:访问容器的端口号或者端口名。如果是端口号,其取值必须在 1 到 65535 之间。
@@ -542,10 +590,6 @@ Here's one scenario where you would set it. Suppose the Container listens on 127
and the Pod's `hostNetwork` field is true. Then `host`, under `httpGet`, should be set
to 127.0.0.1. If your pod relies on virtual hosts, which is probably the more common
case, you should not use `host`, but rather set the `Host` header in `httpHeaders`.
-
-For a TCP probe, the kubelet makes the probe connection at the node, not in the pod, which
-means that you can not use a service name in the `host` parameter since the kubelet is unable
-to resolve it.
-->
对于 HTTP 探测,kubelet 发送一个 HTTP 请求到指定的路径和端口来执行检测。
除非 `httpGet` 中的 `host` 字段设置了,否则 kubelet 默认是给 Pod 的 IP 地址发送探测。
@@ -556,6 +600,61 @@ to resolve it.
可能更常见的情况是如果 Pod 依赖虚拟主机,你不应该设置 `host` 字段,而是应该在
`httpHeaders` 中设置 `Host`。
+
+针对 HTTP 探针,kubelet 除了必需的 `Host` 头部之外还发送两个请求头部字段:
+`User-Agent` 和 `Accept`。这些头部的默认值分别是 `kube-probe/{{< skew latestVersion >}}`
+(其中 `{{< skew latestVersion >}}` 是 kubelet 的版本号)和 `*/*`。
+
+你可以通过为探测设置 `.httpHeaders` 来重载默认的头部字段值;例如:
+
+```yaml
+livenessProbe:
+ httpGet:
+ httpHeaders:
+ - name: Accept
+ value: application/json
+
+startupProbe:
+ httpGet:
+ httpHeaders:
+ - name: User-Agent
+ value: MyUserAgent
+```
+
+
+你也可以通过将这些头部字段定义为空值,从请求中去掉这些头部字段。
+
+```yaml
+livenessProbe:
+ httpGet:
+ httpHeaders:
+ - name: Accept
+ value: ""
+
+startupProbe:
+ httpGet:
+ httpHeaders:
+ - name: User-Agent
+ value: ""
+```
+
+
+### TCP 探测 {#tcp-probes}
+
对于一次 TCP 探测,kubelet 在节点上(不是在 Pod 里面)建立探测连接,
这意味着你不能在 `host` 参数上配置服务名称,因为 kubelet 不能解析服务名称。
diff --git a/content/zh/docs/tasks/debug-application-cluster/events-stackdriver.md b/content/zh/docs/tasks/debug-application-cluster/events-stackdriver.md
deleted file mode 100644
index 04a20a7174b7d..0000000000000
--- a/content/zh/docs/tasks/debug-application-cluster/events-stackdriver.md
+++ /dev/null
@@ -1,156 +0,0 @@
----
-content_type: concept
-title: StackDriver 中的事件
----
-
-
-
-
-
-
-
-Kubernetes 事件是一种对象,它为用户提供了洞察集群内发生的事情的能力,
-例如调度程序做出了什么决定,或者为什么某些 Pod 被逐出节点。
-你可以在[应用程序自检和调试](/zh/docs/tasks/debug-application-cluster/debug-application-introspection/)
-中阅读有关使用事件调试应用程序的更多信息。
-
-
-因为事件是 API 对象,所以它们存储在主控节点上的 API 服务器中。
-为了避免主节点磁盘空间被填满,将强制执行保留策略:事件在最后一次发生的一小时后将会被删除。
-为了提供更长的历史记录和聚合能力,应该安装第三方解决方案来捕获事件。
-
-
-本文描述了一个将 Kubernetes 事件导出为 Stackdriver Logging 的解决方案,在这里可以对它们进行处理和分析。
-
-
-{{< note >}}
-不能保证集群中发生的所有事件都将导出到 Stackdriver。
-事件不能导出的一种可能情况是事件导出器没有运行(例如,在重新启动或升级期间)。
-在大多数情况下,可以将事件用于设置
-[metrics](https://cloud.google.com/logging/docs/view/logs_based_metrics) 和
-[alerts](https://cloud.google.com/logging/docs/view/logs_based_metrics#creating_an_alerting_policy)
-等目的,但你应该注意其潜在的不准确性。
-{{< /note >}}
-
-
-
-
-## 部署 {#deployment}
-
-### Google Kubernetes Engine
-
-
-
-在 Google Kubernetes Engine 中,如果启用了云日志,那么事件导出器默认部署在主节点运行版本为 1.7 及更高版本的集群中。
-为了防止干扰你的工作负载,事件导出器没有设置资源,并且处于尽力而为的 QoS 类型中,这意味着它将在资源匮乏的情况下第一个被杀死。
-如果要导出事件,请确保有足够的资源给事件导出器 Pod 使用。
-这可能会因为工作负载的不同而有所不同,但平均而言,需要大约 100MB 的内存和 100m 的 CPU。
-
-
-### 部署到现有集群
-
-使用下面的命令将事件导出器部署到你的集群:
-
-```shell
-kubectl create -f https://k8s.io/examples/debug/event-exporter.yaml
-```
-
-
-
-由于事件导出器访问 Kubernetes API,因此它需要权限才能访问。
-以下的部署配置为使用 RBAC 授权。
-它设置服务帐户和集群角色绑定,以允许事件导出器读取事件。
-为了确保事件导出器 Pod 不会从节点中退出,你可以另外设置资源请求。
-如前所述,100MB 内存和 100m CPU 应该就足够了。
-
-{{< codenew file="debug/event-exporter.yaml" >}}
-
-
-## 用户指南 {#user-guide}
-
-事件在 Stackdriver Logging 中被导出到 `GKE Cluster` 资源。
-你可以通过从可用资源的下拉菜单中选择适当的选项来找到它们:
-
-
-
-
-
-你可以使用 Stackdriver Logging 的
-[过滤机制](https://cloud.google.com/logging/docs/view/advanced_filters)
-基于事件对象字段进行过滤。
-例如,下面的查询将显示调度程序中有关 Deployment `nginx-deployment` 中的 Pod 的事件:
-
-```
-resource.type="gke_cluster"
-jsonPayload.kind="Event"
-jsonPayload.source.component="default-scheduler"
-jsonPayload.involvedObject.name:"nginx-deployment"
-```
-
-{{< figure src="/images/docs/stackdriver-event-exporter-filter.png" alt="在 Stackdriver 接口中过滤的事件" width="500" >}}
-
-
diff --git a/content/zh/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md b/content/zh/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md
deleted file mode 100644
index 2dbabad039e13..0000000000000
--- a/content/zh/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana.md
+++ /dev/null
@@ -1,197 +0,0 @@
----
-content_type: concept
-title: 使用 ElasticSearch 和 Kibana 进行日志管理
----
-
-
-
-
-
-
-在 Google Compute Engine (GCE) 平台上,默认的日志管理支持目标是
-[Stackdriver Logging](https://cloud.google.com/logging/),
-在[使用 Stackdriver Logging 管理日志](/zh/docs/tasks/debug-application-cluster/logging-stackdriver/)
-中详细描述了这一点。
-
-
-本文介绍了如何设置一个集群,将日志导入
-[Elasticsearch](https://www.elastic.co/products/elasticsearch),并使用
-[Kibana](https://www.elastic.co/products/kibana) 查看日志,作为在 GCE 上
-运行应用时使用 Stackdriver Logging 管理日志的替代方案。
-
-
-{{< note >}}
-你不能在 Google Kubernetes Engine 平台运行的 Kubernetes 集群上自动部署
-Elasticsearch 和 Kibana。你必须手动部署它们。
-{{< /note >}}
-
-
-
-
-要使用 Elasticsearch 和 Kibana 处理集群日志,你应该在使用 kube-up.sh
-脚本创建集群时设置下面所示的环境变量:
-
-```shell
-KUBE_LOGGING_DESTINATION=elasticsearch
-```
-
-
-你还应该确保设置了 `KUBE_ENABLE_NODE_LOGGING=true` (这是 GCE 平台的默认设置)。
-
-
-现在,当你创建集群时,将有一条消息将指示每个节点上运行的 fluentd 日志收集守护进程
-以 ElasticSearch 为日志输出目标:
-
-```shell
-cluster/kube-up.sh
-```
-
-```
-...
-Project: kubernetes-satnam
-Zone: us-central1-b
-... calling kube-up
-Project: kubernetes-satnam
-Zone: us-central1-b
-+++ Staging server tars to Google Storage: gs://kubernetes-staging-e6d0e81793/devel
-+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = 6987c098277871b6d69623141276924ab687f89d)
-+++ kubernetes-salt.tar.gz uploaded (sha1 = bdfc83ed6b60fa9e3bff9004b542cfc643464cd0)
-Looking for already existing resources
-Starting master and configuring firewalls
-Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/zones/us-central1-b/disks/kubernetes-master-pd].
-NAME ZONE SIZE_GB TYPE STATUS
-kubernetes-master-pd us-central1-b 20 pd-ssd READY
-Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip].
-+++ Logging using Fluentd to elasticsearch
-```
-
-
-每个节点的 Fluentd Pod、Elasticsearch Pod 和 Kibana Pod 都应该在集群启动后不久运行在
-kube-system 名字空间中。
-
-```shell
-kubectl get pods --namespace=kube-system
-```
-
-```
-NAME READY STATUS RESTARTS AGE
-elasticsearch-logging-v1-78nog 1/1 Running 0 2h
-elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h
-fluentd-elasticsearch-kubernetes-node-5oq0 1/1 Running 0 2h
-fluentd-elasticsearch-kubernetes-node-6896 1/1 Running 0 2h
-fluentd-elasticsearch-kubernetes-node-l1ds 1/1 Running 0 2h
-fluentd-elasticsearch-kubernetes-node-lz9j 1/1 Running 0 2h
-kibana-logging-v1-bhpo8 1/1 Running 0 2h
-kube-dns-v3-7r1l9 3/3 Running 0 2h
-monitoring-heapster-v4-yl332 1/1 Running 1 2h
-monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h
-```
-
-
-`fluentd-elasticsearch` Pod 从每个节点收集日志并将其发送到 `elasticsearch-logging` Pod,
-该 Pod 是名为 `elasticsearch-logging` 的
-[服务](/zh/docs/concepts/services-networking/service/)的一部分。
-这些 ElasticSearch pod 存储日志,并通过 REST API 将其公开。
-`kibana-logging` pod 提供了一个用于读取 ElasticSearch 中存储的日志的 Web UI,
-它是名为 `kibana-logging` 的服务的一部分。
-
-
-
-Elasticsearch 和 Kibana 服务都位于 `kube-system` 名字空间中,并且没有通过
-可公开访问的 IP 地址直接暴露。要访问它们,请参照
-[访问集群中运行的服务](/zh/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster)
-的说明进行操作。
-
-
-如果你想在浏览器中访问 `elasticsearch-logging` 服务,你将看到类似下面的状态页面:
-
-
-
-
-现在你可以直接在浏览器中输入 Elasticsearch 查询,如果你愿意的话。
-请参考 [Elasticsearch 的文档](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request.html)
-以了解这样做的更多细节。
-
-
-
-或者,你可以使用 Kibana 查看集群的日志(再次使用
-[访问集群中运行的服务的说明](/zh/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster))。
-第一次访问 Kibana URL 时,将显示一个页面,要求你配置所接收日志的视图。
-选择时间序列值的选项,然后选择 `@timestamp`。
-在下面的页面中选择 `Discover` 选项卡,然后你应该能够看到所摄取的日志。
-你可以将刷新间隔设置为 5 秒,以便定期刷新日志。
-
-
-
-以下是从 Kibana 查看器中摄取日志的典型视图:
-
-
-
-## {{% heading "whatsnext" %}}
-
-
-Kibana 为浏览你的日志提供了各种强大的选项!有关如何深入研究它的一些想法,
-请查看 [Kibana 的文档](https://www.elastic.co/guide/en/kibana/current/discover.html)。
-
diff --git a/content/zh/docs/test.md b/content/zh/docs/test.md
index d873d6cb6d9fb..b6ee5be151782 100644
--- a/content/zh/docs/test.md
+++ b/content/zh/docs/test.md
@@ -783,7 +783,7 @@ sequenceDiagram
{{</ mermaid >}}
在官方网站上有更多的[示例](https://mermaid-js.github.io/mermaid/#/examples)。
diff --git a/content/zh/docs/tutorials/stateful-application/zookeeper.md b/content/zh/docs/tutorials/stateful-application/zookeeper.md
index 500259f5fa5a3..3f5baf3c27599 100644
--- a/content/zh/docs/tutorials/stateful-application/zookeeper.md
+++ b/content/zh/docs/tutorials/stateful-application/zookeeper.md
@@ -1,5 +1,10 @@
---
-approvers:
+title: 运行 ZooKeeper,一个分布式协调系统
+content_type: tutorial
+weight: 40
+---
+
@@ -20,8 +25,11 @@ Kubernetes using [StatefulSets](/docs/concepts/workloads/controllers/statefulset
[PodDisruptionBudgets](/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget),
and [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).
-->
-
-本教程展示了在 Kubernetes 上使用 [StatefulSets](/zh/docs/concepts/workloads/controllers/statefulset/),[PodDisruptionBudgets](/zh/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget) 和 [PodAntiAffinity](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#亲和与反亲和) 特性运行 [Apache Zookeeper](https://zookeeper.apache.org)。
+本教程展示了在 Kubernetes 上使用
+[StatefulSet](/zh/docs/concepts/workloads/controllers/statefulset/),
+[PodDisruptionBudget](/zh/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget) 和
+[PodAntiAffinity](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#亲和与反亲和)
+特性运行 [Apache Zookeeper](https://zookeeper.apache.org)。
## {{% heading "prerequisites" %}}
@@ -29,44 +37,45 @@ and [PodAntiAffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affini
Before starting this tutorial, you should be familiar with the following
Kubernetes concepts.
-->
-
在开始本教程前,你应该熟悉以下 Kubernetes 概念。
-- [Pods](/zh/docs/concepts/workloads/pods/)
-- [Cluster DNS](/zh/docs/concepts/services-networking/dns-pod-service/)
-- [Headless Services](/zh/docs/concepts/services-networking/service/#headless-services)
-- [PersistentVolumes](/zh/docs/concepts/storage/persistent-volumes/)
-- [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/)
-- [StatefulSets](/zh/docs/concepts/workloads/controllers/statefulset/)
-- [PodDisruptionBudgets](/zh/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget)
-- [PodAntiAffinity](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#亲和与反亲和)
-- [kubectl CLI](/zh/docs/reference/kubectl/kubectl/)
+- [Pods](/zh/docs/concepts/workloads/pods/)
+- [集群 DNS](/zh/docs/concepts/services-networking/dns-pod-service/)
+- [无头服务(Headless Service)](/zh/docs/concepts/services-networking/service/#headless-services)
+- [PersistentVolumes](/zh/docs/concepts/storage/persistent-volumes/)
+- [PersistentVolume 制备](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/)
+- [StatefulSet](/zh/docs/concepts/workloads/controllers/statefulset/)
+- [PodDisruptionBudget](/zh/docs/concepts/workloads/pods/disruptions/#pod-disruption-budget)
+- [PodAntiAffinity](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#亲和与反亲和)
+- [kubectl CLI](/zh/docs/reference/kubectl/kubectl/)
+你需要一个至少包含四个节点的集群,每个节点至少 2 CPUs 和 4 GiB 内存。
+在本教程中你将会隔离(Cordon)和腾空(Drain)集群的节点。
+**这意味着集群节点上所有的 Pods 将会被终止并移除。这些节点也会暂时变为不可调度**。
+在本教程中你应该使用一个独占的集群,或者保证你造成的干扰不会影响其它租户。
+
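+开始之前,可以用下面的命令做一次可选的检查(示意,假设你有读取节点信息的权限):
+
+```shell
+# 示意:列出各节点可分配的 CPU 与内存,确认满足本教程的资源要求
+kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory
+```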
-
-你需要一个至少包含四个节点的集群,每个节点至少 2 CPUs 和 4 GiB 内存。在本教程中你将会 cordon 和 drain 集群的节点。**这意味着集群节点上所有的 Pods 将会被终止并移除**。**这些节点也会暂时变为不可调度**。在本教程中你应该使用一个独占的集群,或者保证你造成的干扰不会影响其它租户。
-
-本教程假设你的集群配置为动态的提供 PersistentVolumes。如果你的集群没有配置成这样,在开始本教程前,你需要手动准备三个 20 GiB 的卷。
-
+本教程假设你的集群配置为动态地提供 PersistentVolumes。
+如果你的集群没有配置成这样,在开始本教程前,你需要手动准备三个 20 GiB 的卷。
## {{% heading "objectives" %}}
-
在学习本教程后,你将熟悉下列内容。
* 如何使用 StatefulSet 部署一个 ZooKeeper ensemble。
@@ -74,11 +83,10 @@ After this tutorial, you will know the following.
* 如何在 ensemble 中 分布 ZooKeeper 服务器的部署。
* 如何在计划维护中使用 PodDisruptionBudgets 确保服务可用性。
-
+### ZooKeeper {#zookeeper-basics}
+[Apache ZooKeeper](https://zookeeper.apache.org/doc/current/)
+是一个分布式的开源协调服务,用于分布式系统。
+ZooKeeper 允许你读取、写入数据和发现数据更新。
+数据按层次结构组织在文件系统中,并复制到 ensemble(一个 ZooKeeper 服务器的集合)
+中所有的 ZooKeeper 服务器。对数据的所有操作都是原子的和顺序一致的。
+ZooKeeper 通过
+[Zab](https://pdfs.semanticscholar.org/b02c/6b00bd5dbdbd951fddb00b906c82fa80f0b3.pdf)
+一致性协议在 ensemble 的所有服务器之间复制一个状态机来确保这个特性。
+
+
+Ensemble 使用 Zab 协议选举一个领导者,在选举出领导者前不能写入数据。
+一旦选举出了领导者,ensemble 使用 Zab 保证所有写入被复制到一个 quorum,
+然后这些写入操作才会被确认并对客户端可用。
+如果没有遵照加权 quorums,一个 quorum 表示包含当前领导者的 ensemble 的多数成员。
+例如,如果 ensemble 有 3 个服务器,一个包含领导者的成员和另一个服务器就组成了一个
+quorum。
+如果 ensemble 不能达成一个 quorum,数据将不能被写入。
+
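+下面的小算例演示了(不使用加权 quorum 时)quorum 大小的计算方式,即 quorum = ⌊N/2⌋ + 1:
+
+```shell
+# 示意:计算不同规模 ensemble 所需的 quorum 大小
+for N in 1 3 5 7; do
+  echo "ensemble 有 $N 台服务器时,quorum 为 $(( N / 2 + 1 )) 台"
+done
+```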
-
-### ZooKeeper 基础
-
-[Apache ZooKeeper](https://zookeeper.apache.org/doc/current/) 是一个分布式的开源协调服务,用于分布式系统。ZooKeeper 允许你读取、写入数据和发现数据更新。数据按层次结构组织在文件系统中,并复制到 ensemble(一个 ZooKeeper 服务器的集合) 中所有的 ZooKeeper 服务器。对数据的所有操作都是原子的和顺序一致的。ZooKeeper 通过 [Zab](https://pdfs.semanticscholar.org/b02c/6b00bd5dbdbd951fddb00b906c82fa80f0b3.pdf) 一致性协议在 ensemble 的所有服务器之间复制一个状态机来确保这个特性。
-
-ensemble 使用 Zab 协议选举一个 leader,在选举出 leader 前不能写入数据。一旦选举出了 leader,ensemble 使用 Zab 保证所有写入被复制到一个 quorum,然后这些写入操作才会被确认并对客户端可用。如果没有遵照加权 quorums,一个 quorum 表示包含当前 leader 的 ensemble 的多数成员。例如,如果 ensemble 有3个服务器,一个包含 leader 的成员和另一个服务器就组成了一个 quorum。如果 ensemble 不能达成一个 quorum,数据将不能被写入。
-
-ZooKeeper 在内存中保存它们的整个状态机,但是每个改变都被写入一个在存储介质上的持久 WAL(Write Ahead Log)。当一个服务器故障时,它能够通过回放 WAL 恢复之前的状态。为了防止 WAL 无限制的增长,ZooKeeper 服务器会定期的将内存状态快照保存到存储介质。这些快照能够直接加载到内存中,所有在这个快照之前的 WAL 条目都可以被安全的丢弃。
+ZooKeeper 在内存中保存它们的整个状态机,但是每个改变都被写入一个在存储介质上的
+持久 WAL(Write Ahead Log)。
+当一个服务器出现故障时,它能够通过回放 WAL 恢复之前的状态。
+为了防止 WAL 无限制地增长,ZooKeeper 服务器会定期地将内存状态快照保存到存储介质。
+这些快照能够直接加载到内存中,所有在这个快照之前的 WAL 条目都可以被安全地丢弃。
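+在后面的小节创建好 ensemble 之后,你可以用类似下面的命令直观地看到这些文件
+(示意;`version-2` 子目录是 ZooKeeper 默认的磁盘布局,具体文件名因集群而异):
+
+```shell
+# 示意:列出 ZooKeeper 的快照(snapshot.*)与事务日志(log.*)文件
+kubectl exec zk-0 -- ls /var/lib/zookeeper/data/version-2
+```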
-
## 创建一个 ZooKeeper Ensemble
下面的清单包含一个
-[Headless Service](/zh/docs/concepts/services-networking/service/#headless-services),
+[无头服务](/zh/docs/concepts/services-networking/service/#headless-services),
一个 [Service](/zh/docs/concepts/services-networking/service/),
一个 [PodDisruptionBudget](/zh/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget),
和一个 [StatefulSet](/zh/docs/concepts/workloads/controllers/statefulset/)。
@@ -127,8 +152,8 @@ Open a terminal, and use the
[`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) command to create the
manifest.
-->
-
-打开一个命令行终端,使用 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply)
+打开一个命令行终端,使用命令
+[`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply)
创建这个清单。
```shell
@@ -139,8 +164,8 @@ kubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml
This creates the `zk-hs` Headless Service, the `zk-cs` Service,
the `zk-pdb` PodDisruptionBudget, and the `zk` StatefulSet.
-->
-
-这个操作创建了 `zk-hs` Headless Service、`zk-cs` Service、`zk-pdb` PodDisruptionBudget 和 `zk` StatefulSet。
+这个操作创建了 `zk-hs` 无头服务、`zk-cs` 服务、`zk-pdb` PodDisruptionBudget
+和 `zk` StatefulSet。
```
service/zk-hs created
@@ -153,8 +178,9 @@ statefulset.apps/zk created
Use [`kubectl get`](/docs/reference/generated/kubectl/kubectl-commands/#get) to watch the
StatefulSet controller create the StatefulSet's Pods.
-->
-
-使用 [`kubectl get`](/docs/reference/generated/kubectl/kubectl-commands/#get) 查看 StatefulSet 控制器创建的 Pods。
+使用命令
+[`kubectl get`](/docs/reference/generated/kubectl/kubectl-commands/#get)
+查看 StatefulSet 控制器创建的 Pods。
```shell
kubectl get pods -w -l app=zk
@@ -163,7 +189,6 @@ kubectl get pods -w -l app=zk
-
一旦 `zk-2` Pod 变成 Running 和 Ready 状态,使用 `CTRL-C` 结束 kubectl。
```
@@ -189,8 +214,8 @@ zk-2 1/1 Running 0 40s
The StatefulSet controller creates three Pods, and each Pod has a container with
a [ZooKeeper](https://www-us.apache.org/dist/zookeeper/stable/) server.
-->
-
-StatefulSet 控制器创建了3个 Pods,每个 Pod 包含一个 [ZooKeeper](https://www-us.apache.org/dist/zookeeper/stable/) 服务器。
+StatefulSet 控制器创建 3 个 Pods,每个 Pod 包含一个
+[ZooKeeper](https://www-us.apache.org/dist/zookeeper/stable/) 服务器。
+### 促成 Leader 选举 {#facilitating-leader-election}
-### 促成 Leader 选举
-
-由于在匿名网络中没有用于选举 leader 的终止算法,Zab 要求显式的进行成员关系配置,以执行 leader 选举。Ensemble 中的每个服务器都需要具有一个独一无二的标识符,所有的服务器均需要知道标识符的全集,并且每个标识符都需要和一个网络地址相关联。
+由于在匿名网络中没有用于选举 leader 的终止算法,Zab 要求显式地进行成员关系配置,
+以执行 leader 选举。Ensemble 中的每个服务器都需要具有一个独一无二的标识符,
+所有的服务器均需要知道标识符的全集,并且每个标识符都需要和一个网络地址相关联。
-使用 [`kubectl exec`](/docs/reference/generated/kubectl/kubectl-commands/#exec) 获取 `zk` StatefulSet 中 Pods 的主机名。
+使用命令
+[`kubectl exec`](/docs/reference/generated/kubectl/kubectl-commands/#exec)
+获取 `zk` StatefulSet 中 Pods 的主机名。
```shell
for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
@@ -215,8 +243,10 @@ for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The hostnames take the form of `<statefulset name>-<ordinal index>`. Because the `replicas` field of the `zk` StatefulSet is set to `3`, the Set's controller creates three Pods with their hostnames set to `zk-0`, `zk-1`, and
`zk-2`.
-->
-
-StatefulSet 控制器基于每个 Pod 的序号索引为它们各自提供一个唯一的主机名。主机名采用 `-` 的形式。由于 `zk` StatefulSet 的 `replicas` 字段设置为3,这个 Set 的控制器将创建3个 Pods,主机名为:`zk-0`、`zk-1` 和 `zk-2`。
+StatefulSet 控制器基于每个 Pod 的序号索引为它们各自提供一个唯一的主机名。
+主机名采用 `<statefulset 名称>-<序数索引>` 的形式。
+由于 `zk` StatefulSet 的 `replicas` 字段设置为 3,这个集合的控制器将创建
+3 个 Pods,主机名为:`zk-0`、`zk-1` 和 `zk-2`。
```
zk-0
@@ -229,8 +259,8 @@ The servers in a ZooKeeper ensemble use natural numbers as unique identifiers, a
To examine the contents of the `myid` file for each server use the following command.
-->
-
-ZooKeeper ensemble 中的服务器使用自然数作为唯一标识符,每个服务器的标识符都保存在服务器的数据目录中一个名为 `myid` 的文件里。
+ZooKeeper ensemble 中的服务器使用自然数作为唯一标识符,
+每个服务器的标识符都保存在服务器的数据目录中一个名为 `myid` 的文件里。
检查每个服务器的 `myid` 文件的内容。
@@ -241,7 +271,6 @@ for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeepe
-
由于标识符为自然数并且序号索引是非负整数,你可以在序号上加 1 来生成一个标识符。
```
@@ -256,8 +285,7 @@ myid zk-2
-
-获取 `zk` StatefulSet 中每个 Pod 的 FQDN (Fully Qualified Domain Name,正式域名)。
+获取 `zk` StatefulSet 中每个 Pod 的全限定域名(Fully Qualified Domain Name,FQDN)。
```shell
for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
@@ -267,8 +295,7 @@ for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
The `zk-hs` Service creates a domain for all of the Pods,
`zk-hs.default.svc.cluster.local`.
-->
-
-`zk-hs` Service 为所有 Pods 创建了一个 domain:`zk-hs.default.svc.cluster.local`。
+`zk-hs` Service 为所有 Pods 创建了一个域:`zk-hs.default.svc.cluster.local`。
```
zk-0.zk-hs.default.svc.cluster.local
@@ -281,10 +308,13 @@ The A records in [Kubernetes DNS](/docs/concepts/services-networking/dns-pod-ser
ZooKeeper stores its application configuration in a file named `zoo.cfg`. Use `kubectl exec` to view the contents of the `zoo.cfg` file in the `zk-0` Pod.
-->
+[Kubernetes DNS](/zh/docs/concepts/services-networking/dns-pod-service/)
+中的 A 记录将 FQDNs 解析成为 Pods 的 IP 地址。
+如果 Pods 被调度,这个 A 记录将会使用 Pods 的新 IP 地址完成更新,
+但 A 记录的名称不会改变。
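+你可以在集群内做一次简单的解析来验证这一点(示意;`busybox:1.28` 镜像与临时 Pod 名称仅用于演示):
+
+```shell
+# 示意:解析 zk-0 的 FQDN,确认 A 记录指向其当前 IP 地址
+kubectl run -it --rm dns-check --image=busybox:1.28 --restart=Never -- \
+  nslookup zk-0.zk-hs.default.svc.cluster.local
+```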
-[Kubernetes DNS](/zh/docs/concepts/services-networking/dns-pod-service/) 中的 A 记录将 FQDNs 解析成为 Pods 的 IP 地址。如果 Pods 被调度,这个 A 记录将会使用 Pods 的新 IP 地址更新,但 A 记录的名称不会改变。
-
-ZooKeeper 在一个名为 `zoo.cfg` 的文件中保存它的应用配置。使用 `kubectl exec` 在 `zk-0` Pod 中查看 `zoo.cfg` 文件的内容。
+ZooKeeper 在一个名为 `zoo.cfg` 的文件中保存它的应用配置。
+使用 `kubectl exec` 在 `zk-0` Pod 中查看 `zoo.cfg` 文件的内容。
```shell
kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
@@ -296,8 +326,9 @@ the file, the `1`, `2`, and `3` correspond to the identifiers in the
ZooKeeper servers' `myid` files. They are set to the FQDNs for the Pods in
the `zk` StatefulSet.
-->
-
-文件底部为 `server.1`、`server.2` 和 `server.3`,其中的 `1`、`2`和`3`分别对应 ZooKeeper 服务器的 `myid` 文件中的标识符。它们被设置为 `zk` StatefulSet 中的 Pods 的 FQDNs。
+文件底部为 `server.1`、`server.2` 和 `server.3`,其中的 `1`、`2` 和 `3`
+分别对应 ZooKeeper 服务器的 `myid` 文件中的标识符。
+它们被设置为 `zk` StatefulSet 中的 Pods 的 FQDNs。
```
clientPort=2181
@@ -317,14 +348,17 @@ server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
```
+### 达成共识 {#achieving-consensus}
-### 达成一致
-
- 一致性协议要求每个参与者的标识符唯一。在 Zab 协议里任何两个参与者都不应该声明相同的唯一标识符。对于让系统中的进程协商哪些进程已经提交了哪些数据而言,这是必须的。如果有两个 Pods 使用相同的序号启动,这两个 ZooKeeper 服务器会将自己识别为相同的服务器。
+ 一致性协议要求每个参与者的标识符唯一。
+在 Zab 协议里任何两个参与者都不应该声明相同的唯一标识符。
+对于让系统中的进程协商哪些进程已经提交了哪些数据而言,这是必须的。
+如果有两个 Pods 使用相同的序号启动,这两个 ZooKeeper 服务器
+会将自己识别为相同的服务器。
```shell
kubectl get pods -w -l app=zk
@@ -355,8 +389,10 @@ the FQDNs of the ZooKeeper servers will resolve to a single endpoint, and that
endpoint will be the unique ZooKeeper server claiming the identity configured
in its `myid` file.
-->
-
-每个 Pod 的 A 记录仅在 Pod 变成 Ready状态时被录入。因此,ZooKeeper 服务器的 FQDNs 只会解析到一个 endpoint,而那个 endpoint 将会是一个唯一的 ZooKeeper 服务器,这个服务器声明了配置在它的 `myid` 文件中的标识符。
+每个 Pod 的 A 记录仅在 Pod 变成 Ready 状态时被录入。
+因此,ZooKeeper 服务器的 FQDNs 只会解析到一个端点,而那个端点将会是
+一个唯一的 ZooKeeper 服务器,这个服务器声明了配置在它的 `myid`
+文件中的标识符。
```
zk-0.zk-hs.default.svc.cluster.local
@@ -369,7 +405,8 @@ This ensures that the `servers` properties in the ZooKeepers' `zoo.cfg` files
represents a correctly configured ensemble.
-->
-这保证了 ZooKeepers 的 `zoo.cfg` 文件中的 `servers` 属性代表了一个正确配置的 ensemble。
+这保证了 ZooKeepers 的 `zoo.cfg` 文件中的 `servers` 属性代表了
+一个正确配置的 ensemble。
```
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
@@ -380,8 +417,10 @@ server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
-
-当服务器使用 Zab 协议尝试提交一个值的时候,它们会达成一致并成功提交这个值(如果 leader 选举成功并且至少有两个 Pods 处于 Running 和 Ready状态),或者将会失败(如果没有满足上述条件中的任意一条)。当一个服务器承认另一个服务器的代写时不会有状态产生。
+当服务器使用 Zab 协议尝试提交一个值的时候,它们会达成一致并成功提交这个值
+(如果领导者选举成功并且至少有两个 Pods 处于 Running 和 Ready 状态),
+或者将会失败(如果没有满足上述条件中的任意一条)。
+不会出现一个服务器代表另一个服务器确认写入的情况。
-
### Ensemble 健康检查
-最基本的健康检查是向一个 ZooKeeper 服务器写入一些数据,然后从另一个服务器读取这些数据。
+最基本的健康检查是向一个 ZooKeeper 服务器写入一些数据,然后从
+另一个服务器读取这些数据。
使用 `zkCli.sh` 脚本在 `zk-0` Pod 上写入 `world` 到路径 `/hello`。
@@ -411,8 +450,7 @@ Created /hello
-
-从 `zk-1` Pod 获取数据。
+使用下面的命令从 `zk-1` Pod 获取数据。
```shell
kubectl exec zk-1 zkCli.sh get /hello
@@ -422,8 +460,7 @@ kubectl exec zk-1 zkCli.sh get /hello
The data that you created on `zk-0` is available on all the servers in the
ensemble.
-->
-
-你在 `zk-0` 创建的数据在 ensemble 中所有的服务器上都是可用的。
+你在 `zk-0` 上创建的数据在 ensemble 中所有的服务器上都是可用的。
```
WATCHER::
@@ -455,12 +492,15 @@ state machine.
Use the [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#delete) command to delete the
`zk` StatefulSet.
-->
+### 提供持久存储
-### 准备持久存储
+如同在 [ZooKeeper](#zookeeper-basics) 一节所提到的,ZooKeeper 提交
+所有的条目到一个持久 WAL,并周期性的将内存快照写入存储介质。
+对于使用一致性协议实现一个复制状态机的应用来说,使用 WALs 提供持久化
+是一种常用的技术,对于普通的存储应用也是如此。
-如同在 [ZooKeeper 基础](#zookeeper-基础) 一节所提到的,ZooKeeper 提交所有的条目到一个持久 WAL,并周期性的将内存快照写入存储介质。对于使用一致性协议实现一个复制状态机的应用来说,使用 WALs 提供持久化是一种常用的技术,对于普通的存储应用也是如此。
-
-使用 [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#delete) 删除 `zk` StatefulSet。
+使用 [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#delete)
+删除 `zk` StatefulSet。
```shell
kubectl delete statefulset zk
@@ -473,7 +513,6 @@ statefulset.apps "zk" deleted
-
观察 StatefulSet 中的 Pods 变为终止状态。
```shell
@@ -483,7 +522,6 @@ kubectl get pods -w -l app=zk
-
当 `zk-0` 完全终止时,使用 `CTRL-C` 结束 kubectl。
```
@@ -504,8 +542,7 @@ zk-0 0/1 Terminating 0 11m
-
-重新应用 `zookeeper.yaml` 中的代码清单。
+重新应用 `zookeeper.yaml` 中的清单。
```shell
kubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml
@@ -516,7 +553,6 @@ This creates the `zk` StatefulSet object, but the other API objects in the manif
Watch the StatefulSet controller recreate the StatefulSet's Pods.
-->
-
`zk` StatefulSet 将会被创建。由于清单中的其他 API 对象已经存在,所以它们不会被修改。
观察 StatefulSet 控制器重建 StatefulSet 的 Pods。
@@ -528,7 +564,6 @@ kubectl get pods -w -l app=zk
-
一旦 `zk-2` Pod 处于 Running 和 Ready 状态,使用 `CTRL-C` 停止 kubectl 命令。
```
@@ -554,7 +589,6 @@ zk-2 1/1 Running 0 40s
Use the command below to get the value you entered during the [sanity test](#sanity-testing-the-ensemble),
from the `zk-2` Pod.
-->
-
从 `zk-2` Pod 中获取你在[健康检查](#Ensemble-健康检查)中输入的值。
```shell
@@ -564,8 +598,8 @@ kubectl exec zk-2 zkCli.sh get /hello
-
-尽管 `zk` StatefulSet 中所有的 Pods 都已经被终止并重建过,ensemble 仍然使用原来的数值提供服务。
+尽管 `zk` StatefulSet 中所有的 Pods 都已经被终止并重建过,ensemble
+仍然使用原来的数值提供服务。
```
WATCHER::
@@ -588,8 +622,8 @@ numChildren = 0
-
-`zk` StatefulSet 的 `spec` 中的 `volumeClaimTemplates` 字段标识了将要为每个 Pod 准备的 PersistentVolume。
+`zk` StatefulSet 的 `spec` 中的 `volumeClaimTemplates` 字段标识了
+将要为每个 Pod 准备的 PersistentVolume。
```yaml
volumeClaimTemplates:
@@ -610,10 +644,9 @@ the `StatefulSet`.
Use the following command to get the `StatefulSet`'s `PersistentVolumeClaims`.
-->
+`StatefulSet` 控制器为 `StatefulSet` 中的每个 Pod 生成一个 `PersistentVolumeClaim`。
-StatefulSet 控制器为 StatefulSet 中的每个 Pod 生成一个 PersistentVolumeClaim。
-
-获取 StatefulSet 的 PersistentVolumeClaims。
+获取 `StatefulSet` 的 `PersistentVolumeClaim`。
```shell
kubectl get pvc -l app=zk
@@ -622,8 +655,7 @@ kubectl get pvc -l app=zk
-
-当 StatefulSet 重新创建它的 Pods时,Pods 的 PersistentVolumes 会被重新挂载。
+当 `StatefulSet` 重新创建它的 Pods 时,Pods 的 PersistentVolumes 会被重新挂载。
```
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
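作为补充检查(示意),可以查看某个 PVC 实际绑定的 PersistentVolume:

```shell
# 示意:取出 datadir-zk-0 这个 PVC 所绑定的卷,并查看其详情
kubectl get pv "$(kubectl get pvc datadir-zk-0 -o jsonpath='{.spec.volumeName}')"
```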
@@ -635,8 +667,8 @@ datadir-zk-2 Bound pvc-bee0817e-bcb1-11e6-994f-42010a800002 20Gi R
-
-StatefulSet 的容器 `template` 中的 `volumeMounts` 一节使得 PersistentVolumes 被挂载到 ZooKeeper 服务器的数据目录。
+StatefulSet 的容器 `template` 中的 `volumeMounts` 一节使得
+PersistentVolumes 被挂载到 ZooKeeper 服务器的数据目录。
```shell
volumeMounts:
@@ -650,11 +682,13 @@ same `PersistentVolume` mounted to the ZooKeeper server's data directory.
Even when the Pods are rescheduled, all the writes made to the ZooKeeper
servers' WALs, and all their snapshots, remain durable.
-->
-
-当 `zk` StatefulSet 中的一个 Pod 被(重新)调度时,它总是拥有相同的 PersistentVolume,挂载到 ZooKeeper 服务器的数据目录。即使在 Pods 被重新调度时,所有对 ZooKeeper 服务器的 WALs 的写入和它们的全部快照都仍然是持久的。
+当 `zk` StatefulSet 中的一个 Pod 被(重新)调度时,它总是拥有相同的 PersistentVolume,
+挂载到 ZooKeeper 服务器的数据目录。
+即使在 Pods 被重新调度时,所有对 ZooKeeper 服务器的 WALs 的写入和它们的
+全部快照都仍然是持久的。
-
## 确保一致性配置
-如同在 [促成 leader 选举](#促成-Leader-选举) 和 [达成一致](#达成一致) 小节中提到的,ZooKeeper ensemble 中的服务器需要一致性的配置来选举一个 leader 并形成一个 quorum。它们还需要 Zab 协议的一致性配置来保证这个协议在网络中正确的工作。在这次的样例中,我们通过直接将配置写入代码清单中来达到该目的。
+如同在[促成领导者选举](#facilitating-leader-election) 和[达成一致](#achieving-consensus)
+小节中提到的,ZooKeeper ensemble 中的服务器需要一致性的配置来选举一个领导者并形成一个
+quorum。它们还需要 Zab 协议的一致性配置来保证这个协议在网络中正确地工作。
+在这次的示例中,我们通过直接将配置写入代码清单中来达到该目的。
获取 `zk` StatefulSet。
@@ -677,8 +713,8 @@ Get the `zk` StatefulSet.
kubectl get sts zk -o yaml
```
```
-…
-command:
+ ...
+ command:
- sh
- -c
- "start-zookeeper \
@@ -699,14 +735,14 @@ command:
--max_session_timeout=40000 \
--min_session_timeout=4000 \
--log_level=INFO"
-…
+...
```
-
-用于启动 ZooKeeper 服务器的命令将这些配置作为命令行参数传给了 ensemble。你也可以通过环境变量来传入这些配置。
+用于启动 ZooKeeper 服务器的命令将这些配置作为命令行参数传给了 ensemble。
+你也可以通过环境变量来传入这些配置。
+### 配置日志 {#configuring-logging}
-### 配置日志
-
-`zkGenConfig.sh` 脚本产生的一个文件控制了 ZooKeeper 的日志行为。ZooKeeper 使用了 [Log4j](http://logging.apache.org/log4j/2.x/) 并默认使用基于文件大小和时间的滚动文件追加器作为日志配置。
+`zkGenConfig.sh` 脚本产生的一个文件控制了 ZooKeeper 的日志行为。
+ZooKeeper 使用了 [Log4j](http://logging.apache.org/log4j/2.x/) 并默认使用
+基于文件大小和时间的滚动文件追加器作为日志配置。
从 `zk` StatefulSet 的一个 Pod 中获取日志配置。
@@ -732,7 +769,6 @@ kubectl exec zk-0 cat /usr/etc/zookeeper/log4j.properties
The logging configuration below will cause the ZooKeeper process to write all
of its logs to the standard output file stream.
-->
-
下面的日志配置会使 ZooKeeper 进程将其所有的日志写入标准输出文件流中。
```
@@ -753,11 +789,13 @@ standard out and standard error do not exhaust local storage media.
Use [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands/#logs) to retrieve the last 20 log lines from one of the Pods.
-->
+这是在容器里安全记录日志的最简单的方法。
+由于应用的日志被写入标准输出,Kubernetes 将会为你处理日志轮转。
+Kubernetes 还实现了一个智能保存策略,保证写入标准输出和标准错误流
+的应用日志不会耗尽本地存储媒介。
-这是在容器里安全记录日志的最简单的方法。由于应用的日志被写入标准输出,Kubernetes 将会为你处理日志轮转。Kubernetes 还实现了一个智能保存策略,保证写入标准输出和标准错误流的应用日志不会耗尽本地存储媒介。
-
-
-使用 [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands/#logs) 从一个 Pod 中取回最后几行日志。
+使用命令 [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands/#logs)
+从一个 Pod 中取回最后 20 行日志。
```shell
kubectl logs zk-0 --tail 20
@@ -766,7 +804,6 @@ kubectl logs zk-0 --tail 20
-
使用 `kubectl logs` 或者从 Kubernetes Dashboard 可以查看写入到标准输出和标准错误流中的应用日志。
```
@@ -793,18 +830,17 @@ You can view application logs written to standard out or standard error using `k
```
-
-Kubernetes 支持与 [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/) 和 [Elasticsearch and Kibana](/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/) 的整合以获得复杂但更为强大的日志功能。
-对于集群级别的日志输出与整合,可以考虑部署一个 [sidecar](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) 容器。
+Kubernetes 支持与多种日志方案集成。你可以选择一个最适合你的集群和应用
+的日志解决方案。对于集群级别的日志输出与整合,可以考虑部署一个
+[边车容器](/zh/docs/concepts/cluster-administration/logging#sidecar-container-with-logging-agent)
+来轮转和提供日志数据。
-
### 配置非特权用户
-在容器中允许应用以特权用户运行这条最佳实践是值得商讨的。如果你的组织要求应用以非特权用户运行,你可以使用 [SecurityContext](/zh/docs/tasks/configure-pod-container/security-context/) 控制运行容器入口点的用户。
+在容器中允许应用以特权用户运行这条最佳实践是值得商讨的。
+如果你的组织要求应用以非特权用户运行,你可以使用
+[SecurityContext](/zh/docs/tasks/configure-pod-container/security-context/)
+控制运行容器入口点所使用的用户。
`zk` StatefulSet 的 Pod 的 `template` 包含了一个 `SecurityContext`。
@@ -833,8 +871,7 @@ corresponds to the zookeeper group.
Get the ZooKeeper process information from the `zk-0` Pod.
-->
-
-在 Pods 的容器内部,UID 1000 对应用户 zookeeper,GID 1000对应用户组 zookeeper。
+在 Pods 的容器内部,UID 1000 对应用户 zookeeper,GID 1000 对应用户组 zookeeper。
从 `zk-0` Pod 获取 ZooKeeper 进程信息。
@@ -846,8 +883,8 @@ kubectl exec zk-0 -- ps -elf
As the `runAsUser` field of the `securityContext` object is set to 1000,
instead of running as root, the ZooKeeper process runs as the zookeeper user.
-->
-
-由于 `securityContext` 对象的 `runAsUser` 字段被设置为1000而不是 root,ZooKeeper 进程将以 zookeeper 用户运行。
+由于 `securityContext` 对象的 `runAsUser` 字段被设置为 1000 而不是 root,
+ZooKeeper 进程将以 zookeeper 用户运行。
```
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
@@ -860,8 +897,8 @@ By default, when the Pod's PersistentVolumes is mounted to the ZooKeeper server'
Use the command below to get the file permissions of the ZooKeeper data directory on the `zk-0` Pod.
-->
-
-默认情况下,当 Pod 的 PersistentVolume 被挂载到 ZooKeeper 服务器的数据目录时,它只能被 root 用户访问。这个配置将阻止 ZooKeeper 进程写入它的 WAL 及保存快照。
+默认情况下,当 Pod 的 PersistentVolume 被挂载到 ZooKeeper 服务器的数据目录时,
+它只能被 root 用户访问。这个配置将阻止 ZooKeeper 进程写入它的 WAL 及保存快照。
在 `zk-0` Pod 上获取 ZooKeeper 数据目录的文件权限。
@@ -872,8 +909,9 @@ kubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data
-
-由于 `securityContext` 对象的 `fsGroup` 字段设置为1000,Pods 的 PersistentVolumes 的所有权属于 zookeeper 用户组,因而 ZooKeeper 进程能够成功的读写数据。
+由于 `securityContext` 对象的 `fsGroup` 字段设置为 1000,Pods 的
+PersistentVolumes 的所有权属于 zookeeper 用户组,因而 ZooKeeper
+进程能够成功地读写数据。
```
drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data
@@ -890,19 +928,19 @@ common pattern. When deploying an application in Kubernetes, rather than using
an external utility as a supervisory process, you should use Kubernetes as the
watchdog for your application.
-->
-
## 管理 ZooKeeper 进程
-[ZooKeeper documentation](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_supervision) 文档指出“你将需要一个监管程序用于管理每个 ZooKeeper 服务进程(JVM)”。在分布式系统中,使用一个看门狗(监管程序)来重启故障进程是一种常用的模式。
+[ZooKeeper 文档](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_supervision)
+指出“你将需要一个监管程序用于管理每个 ZooKeeper 服务进程(JVM)”。
+在分布式系统中,使用一个看门狗(监管程序)来重启故障进程是一种常用的模式。
-
### 更新 Ensemble
`zk` `StatefulSet` 的更新策略被设置为了 `RollingUpdate`。
@@ -912,6 +950,7 @@ You can use `kubectl patch` to update the number of `cpus` allocated to the serv
```shell
kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"0.3"}]'
```
+
```
statefulset.apps/zk patched
```
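滚动更新完成后,可以用下面的命令确认新的 CPU 请求已经生效(示意):

```shell
# 示意:读取 Pod 模板中第一个容器的 CPU 请求
kubectl get sts zk -o jsonpath='{.spec.template.spec.containers[0].resources.requests.cpu}'
```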
@@ -919,12 +958,12 @@ statefulset.apps/zk patched
-
使用 `kubectl rollout status` 观测更新状态。
```shell
kubectl rollout status sts/zk
```
+
```
waiting for statefulset rolling update to complete 0 pods at revision zk-5db4499664...
Waiting for 1 pods to be ready...
@@ -943,8 +982,8 @@ This terminates the Pods, one at a time, in reverse ordinal order, and recreates
Use the `kubectl rollout history` command to view a history or previous configurations.
-->
-
-这项操作会逆序地依次终止每一个 Pod,并用新的配置重新创建。这样做确保了在滚动更新的过程中 quorum 依旧保持工作。
+这项操作会逆序地依次终止每一个 Pod,并用新的配置重新创建。
+这样做确保了在滚动更新的过程中 quorum 依旧保持工作。
使用 `kubectl rollout history` 命令查看历史或先前的配置。
@@ -962,7 +1001,6 @@ REVISION
-
使用 `kubectl rollout undo` 命令撤销这次的改动。
```shell
@@ -974,7 +1012,7 @@ statefulset.apps/zk rolled back
```
-
### 处理进程故障
-[Restart Policies](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) 控制 Kubernetes 如何处理一个 Pod 中容器入口点的进程故障。对于 StatefulSet 中的 Pods 来说,Always 是唯一合适的 RestartPolicy,这也是默认值。你应该**绝不**覆盖 stateful 应用的默认策略。
+[重启策略](/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)
+控制 Kubernetes 如何处理一个 Pod 中容器入口点的进程故障。
+对于 StatefulSet 中的 Pods 来说,Always 是唯一合适的 RestartPolicy,也是默认值。
+你应该**绝不**覆盖有状态应用的默认重启策略。
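+可以用下面的命令确认这一默认值(示意):
+
+```shell
+# 示意:读取 zk StatefulSet Pod 模板中的重启策略,预期输出 Always
+kubectl get sts zk -o jsonpath='{.spec.template.spec.restartPolicy}'
+```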
检查 `zk-0` Pod 中运行的 ZooKeeper 服务器的进程树。
@@ -999,8 +1039,8 @@ kubectl exec zk-0 -- ps -ef
The command used as the container's entry point has PID 1, and
the ZooKeeper process, a child of the entry point, has PID 27.
-->
-
-作为容器入口点的命令的 PID 为 1,Zookeeper 进程是入口点的子进程,PID 为27。
+作为容器入口点的命令的 PID 为 1,ZooKeeper 进程是入口点的子进程,
+PID 为 27。
```
UID PID PPID C STIME TTY TIME CMD
@@ -1011,8 +1051,7 @@ zookeep+ 27 1 0 15:03 ? 00:00:03 /usr/lib/jvm/java-8-openjdk-amd6
-
-在一个终端观察 `zk` StatefulSet 中的 Pods。
+在一个终端观察 `zk` `StatefulSet` 中的 Pods。
```shell
kubectl get pod -w -l app=zk
@@ -1021,7 +1060,6 @@ kubectl get pod -w -l app=zk
-
在另一个终端杀掉 Pod `zk-0` 中的 ZooKeeper 进程。
```shell
@@ -1031,8 +1069,8 @@ In another terminal, terminate the ZooKeeper process in Pod `zk-0` with the foll
-
-ZooKeeper 进程的终结导致了它父进程的终止。由于容器的 RestartPolicy 是 Always,父进程被重启。
+ZooKeeper 进程的终止导致了其父进程的终止。由于容器的 `RestartPolicy`
+是 Always,父进程被重启。
```
NAME READY STATUS RESTARTS AGE
@@ -1051,11 +1089,12 @@ that implements the application's business logic, the script must terminate with
child process. This ensures that Kubernetes will restart the application's
container when the process implementing the application's business logic fails.
-->
-
-如果你的应用使用一个脚本(例如 zkServer.sh)来启动一个实现了应用业务逻辑的进程,这个脚本必须和子进程一起结束。这保证了当实现应用业务逻辑的进程故障时,Kubernetes 会重启这个应用的容器。
+如果你的应用使用一个脚本(例如 `zkServer.sh`)来启动一个实现了应用业务逻辑的进程,
+这个脚本必须和子进程一起结束。这保证了当实现应用业务逻辑的进程故障时,
+Kubernetes 会重启这个应用的容器。
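+实现这一点的常见做法是在包装脚本中使用 `exec`,让业务进程取代脚本进程。
+下面是一个假设性的示意脚本(并非 `zkServer.sh` 的实际内容,jar 路径仅为演示):
+
+```shell
+#!/bin/sh
+# 示意:入口点包装脚本。exec 使 Java 进程替换 shell 进程,
+# 业务进程退出时容器随之退出,Kubernetes 即可依据 RestartPolicy 重启容器。
+set -e
+exec java -jar /opt/app/app.jar "$@"
+```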
-
### 存活性测试
-你的应用配置为自动重启故障进程,但这对于保持一个分布式系统的健康来说是不够的。许多场景下,一个系统进程可以是活动状态但不响应请求,或者是不健康状态。你应该使用 liveness probes 来通知 Kubernetes 你的应用进程处于不健康状态,需要被重启。
+你的应用配置为自动重启故障进程,但这对于保持一个分布式系统的健康来说是不够的。
+许多场景下,一个系统进程可以是活动状态但不响应请求,或者是不健康状态。
+你应该使用存活性探针来通知 Kubernetes 你的应用进程处于不健康状态,需要被重启。
`zk` StatefulSet 的 Pod 的 `template` 一节指定了一个存活探针。
```yaml
livenessProbe:
- exec:
- command:
- - sh
- - -c
- - "zookeeper-ready 2181"
- initialDelaySeconds: 15
- timeoutSeconds: 5
+ exec:
+ command:
+ - sh
+ - -c
+ - "zookeeper-ready 2181"
+ initialDelaySeconds: 15
+ timeoutSeconds: 5
```
-
-这个探针调用一个简单的 bash 脚本,使用 ZooKeeper 的四字缩写 `ruok` 来测试服务器的健康状态。
+这个探针调用一个简单的 Bash 脚本,使用 ZooKeeper 的四字命令 `ruok`
+来测试服务器的健康状态。
```
OK=$(echo ruok | nc 127.0.0.1 $1)
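# 补充示意(假设性检查):也可以手动发送 ruok 四字命令,
# 健康的服务器会回应 imok,例如:
#   kubectl exec zk-0 -- sh -c 'echo ruok | nc 127.0.0.1 2181'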
@@ -1102,8 +1142,7 @@ fi
-
-在一个终端窗口观察 `zk` StatefulSet 中的 Pods。
+在一个终端窗口中使用下面的命令观察 `zk` StatefulSet 中的 Pods。
```shell
kubectl get pod -w -l app=zk
@@ -1112,7 +1151,6 @@ kubectl get pod -w -l app=zk
-
在另一个窗口中,从 Pod `zk-0` 的文件系统中删除 `zookeeper-ready` 脚本。
```shell
@@ -1124,8 +1162,8 @@ When the liveness probe for the ZooKeeper process fails, Kubernetes will
automatically restart the process for you, ensuring that unhealthy processes in
the ensemble are restarted.
-->
-
-当 ZooKeeper 进程的存活探针探测失败时,Kubernetes 将会为你自动重启这个进程,从而保证 ensemble 中不健康状态的进程都被重启。
+当 ZooKeeper 进程的存活探针探测失败时,Kubernetes 将会为你自动重启这个进程,
+从而保证 ensemble 中不健康状态的进程都被重启。
```shell
kubectl get pod -w -l app=zk
@@ -1143,28 +1181,32 @@ zk-0 1/1 Running 1 1h
```
+### 就绪性测试
+
+就绪不同于存活。如果一个进程是存活的,它是可调度和健康的。
+如果一个进程是就绪的,它应该能够处理输入。存活是就绪的必要非充分条件。
+在许多场景下,特别是初始化和终止过程中,一个进程可以是存活但没有就绪的。
+
+如果你指定了一个就绪探针,Kubernetes 将保证在就绪检查通过之前,
+你的应用不会接收到网络流量。
-### 就绪性测试
-
-就绪不同于存活。如果一个进程是存活的,它是可调度和健康的。如果一个进程是就绪的,它应该能够处理输入。存活是就绪的必要非充分条件。在许多场景下,特别是初始化和终止过程中,一个进程可以是存活但没有就绪的。
-
-如果你指定了一个就绪探针,Kubernetes将保证在就绪检查通过之前,你的应用不会接收到网络流量。
-
-对于一个 ZooKeeper 服务器来说,存活即就绪。因此 `zookeeper.yaml` 清单中的就绪探针和存活探针完全相同。
+对于一个 ZooKeeper 服务器来说,存活即就绪。
+因此 `zookeeper.yaml` 清单中的就绪探针和存活探针完全相同。
```yaml
readinessProbe:
@@ -1182,11 +1224,11 @@ Even though the liveness and readiness probes are identical, it is important
to specify both. This ensures that only healthy servers in the ZooKeeper
ensemble receive network traffic.
-->
-
-虽然存活探针和就绪探针是相同的,但同时指定它们两者仍然重要。这保证了 ZooKeeper ensemble 中只有健康的服务器能接收网络流量。
+虽然存活探针和就绪探针是相同的,但同时指定它们两者仍然重要。
+这保证了 ZooKeeper ensemble 中只有健康的服务器能接收网络流量。
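+可以通过查看 Service 的 Endpoints 来验证这一点(示意):
+
+```shell
+# 示意:只有就绪的 zk Pods 的地址才会出现在 zk-cs 服务的 Endpoints 中
+kubectl get endpoints zk-cs
+```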
+## 容忍节点故障
+
+ZooKeeper 需要一个 quorum 来提交数据变动。对于一个拥有 3 个服务器的 ensemble 来说,
+必须有两个服务器是健康的,写入才能成功。
+在基于 quorum 的系统里,成员被部署在多个故障域中以保证可用性。
+为了防止由于某台机器断连引起服务中断,最佳实践是防止应用的多个实例在相同的机器上共存。
+
+默认情况下,Kubernetes 可以把 StatefulSet 的 Pods 部署在相同节点上。
+对于你创建的 3 个服务器的 ensemble 来说,如果有两个服务器位于
+相同的节点上,并且该节点发生故障,ZooKeeper 服务将中断,
+直至至少一个 Pod 被重新调度。
+
-
-## 容忍节点故障
-
-ZooKeeper 需要一个 quorum 来提交数据变动。对于一个拥有 3 个服务器的 ensemble来说,必须有两个服务器是健康的,写入才能成功。在基于 quorum 的系统里,成员被部署在故障域之间以保证可用性。为了防止由于某台机器断连引起服务中断,最佳实践是防止应用的多个示例在相同的机器上共存。
-
-默认情况下,Kubernetes 可以把 StatefulSet 的 Pods 部署在相同节点上。对于你创建的 3 个服务器的 ensemble 来说,如果有两个服务器并存于相同的节点上并且该节点发生故障时,ZooKeeper 服务将中断,直至至少一个 Pods 被重新调度。
-
-你应该总是提供额外的容量以允许关键系统进程在节点故障时能够被重新调度。如果你这样做了,服务故障就只会持续到 Kubernetes 调度器重新调度某个 ZooKeeper 服务器为止。但是,如果希望你的服务在容忍节点故障时无停服时间,你应该设置 `podAntiAffinity`。
+你应该总是提供额外的容量,以允许关键系统进程在节点故障时能够被重新调度。
+如果你这样做了,服务故障就只会持续到 Kubernetes 调度器重新调度某个
+ZooKeeper 服务器为止。
+但是,如果希望你的服务在容忍节点故障时无停服时间,你应该设置 `podAntiAffinity`。
获取 `zk` Stateful Set 中的 Pods 的节点。
@@ -1225,8 +1277,7 @@ for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo "";
-
-`zk` StatefulSe 中所有的 Pods 都被部署在不同的节点。
+`zk` `StatefulSet` 中所有的 Pods 都被部署在不同的节点。
```
kubernetes-node-cxpk
@@ -1238,19 +1289,19 @@ kubernetes-node-2g2d
This is because the Pods in the `zk` `StatefulSet` have a `PodAntiAffinity` specified.
-->
-这是因为 `zk` StatefulSet 中的 Pods 指定了 `PodAntiAffinity`。
+这是因为 `zk` `StatefulSet` 中的 Pods 指定了 `PodAntiAffinity`。
```yaml
- affinity:
- podAntiAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- - labelSelector:
- matchExpressions:
- - key: "app"
- operator: In
- values:
- - zk
- topologyKey: "kubernetes.io/hostname"
+affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchExpressions:
+ - key: "app"
+ operator: In
+ values:
+ - zk
+ topologyKey: "kubernetes.io/hostname"
```
-
-`requiredDuringSchedulingIgnoredDuringExecution` 告诉 Kubernetes 调度器,在以 `topologyKey` 指定的域中,绝对不要把带有键为 `app`,值为 `zk` 的标签的两个 Pods 调度到相同的节点。`topologyKey`
-`kubernetes.io/hostname` 表示这个域是一个单独的节点。使用不同的 rules、labels 和 selectors,你能够通过这种技术把你的 ensemble 分布在不同的物理、网络和电力故障域之间。
+`requiredDuringSchedulingIgnoredDuringExecution` 告诉 Kubernetes 调度器,
+在以 `topologyKey` 指定的域中,绝对不要把带有键为 `app`、值为 `zk` 的标签
+的两个 Pods 调度到相同的节点。`topologyKey` `kubernetes.io/hostname` 表示
+这个域是一个单独的节点。
+使用不同的规则、标签和选择算符,你能够通过这种技术把你的 ensemble 分布
+在不同的物理、网络和电力故障域之间。
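+例如,下面是一个假设性的示意:如果希望把服务器分布在不同可用区而非不同节点,
+可以把 `topologyKey` 换成众所周知的区域标签 `topology.kubernetes.io/zone`
+(注意修改 Pod 模板会触发一次滚动更新):
+
+```shell
+# 示意:将反亲和性的 topologyKey 改为按可用区打散(假设性示例)
+kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/affinity/podAntiAffinity/requiredDuringSchedulingIgnoredDuringExecution/0/topologyKey", "value":"topology.kubernetes.io/zone"}]'
+```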
+## 节点维护期间保持应用可用
-## 存活管理
-
-**在本节中你将会 cordon 和 drain 节点。如果你是在一个共享的集群里使用本教程,请保证不会影响到其他租户**
+**在本节中你将会隔离(Cordon)和腾空(Drain)节点。
+如果你是在一个共享的集群里使用本教程,请保证不会影响到其他租户。**
-上一小节展示了如何在节点之间分散 Pods 以在计划外的节点故障时保证服务存活。但是你也需要为计划内维护引起的临时节点故障做准备。
+上一小节展示了如何在节点之间分散 Pods 以在计划外的节点故障时保证服务存活。
+但是你也需要为计划内维护引起的临时节点故障做准备。
-获取你集群中的节点。
+使用此命令获取你的集群中的节点。
```shell
kubectl get nodes
@@ -1295,7 +1350,8 @@ Use [`kubectl cordon`](/docs/reference/generated/kubectl/kubectl-commands/#cordo
cordon all but four of the nodes in your cluster.
-->
-使用 [`kubectl cordon`](/docs/reference/generated/kubectl/kubectl-commands/#cordon) cordon 你的集群中除4个节点以外的所有节点。
+使用 [`kubectl cordon`](/docs/reference/generated/kubectl/kubectl-commands/#cordon)
+隔离你的集群中除 4 个节点以外的所有节点。
```shell
kubectl cordon <node-name>
@@ -1304,8 +1360,7 @@ kubectl cordon
-
-获取 `zk-pdb` `PodDisruptionBudget`。
+使用下面的命令获取 `zk-pdb` `PodDisruptionBudget`。
```shell
kubectl get pdb zk-pdb
@@ -1315,8 +1370,8 @@ kubectl get pdb zk-pdb
The `max-unavailable` field indicates to Kubernetes that at most one Pod from
`zk` `StatefulSet` can be unavailable at any time.
-->
-
-`max-unavailable` 字段指示 Kubernetes 在任何时候,`zk` `StatefulSet` 至多有一个 Pod 是不可用的。
+`max-unavailable` 字段指示 Kubernetes 在任何时候,`zk` `StatefulSet`
+至多有一个 Pod 是不可用的。
```
NAME MIN-AVAILABLE MAX-UNAVAILABLE ALLOWED-DISRUPTIONS AGE
@@ -1326,8 +1381,7 @@ zk-pdb N/A 1 1
-
-在一个终端观察 `zk` `StatefulSet` 中的 Pods。
+在一个终端中,使用下面的命令观察 `zk` `StatefulSet` 中的 Pods。
```shell
kubectl get pods -w -l app=zk
@@ -1337,7 +1391,7 @@ kubectl get pods -w -l app=zk
In another terminal, use this command to get the nodes that the Pods are currently scheduled on.
-->
-在另一个终端获取 Pods 当前调度的节点。
+在另一个终端中,使用下面的命令获取 Pods 当前调度的节点。
```shell
for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done
@@ -1354,7 +1408,8 @@ Use [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain)
drain the node on which the `zk-0` Pod is scheduled.
-->
-使用 [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain) 来 cordon 和 drain `zk-0` Pod 调度的节点。
+使用 [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain)
+来隔离和腾空 `zk-0` Pod 调度所在的节点。
```shell
kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
@@ -1372,8 +1427,7 @@ node "kubernetes-node-pb41" drained
As there are four nodes in your cluster, `kubectl drain`, succeeds and the
`zk-0` is rescheduled to another node.
-->
-
-由于你的集群中有4个节点, `kubectl drain` 执行成功,`zk-0 被调度到其它节点。
+由于你的集群中有 4 个节点, `kubectl drain` 执行成功,`zk-0` 被调度到其它节点。
```
NAME READY STATUS RESTARTS AGE
@@ -1396,8 +1450,7 @@ zk-0 1/1 Running 0 1m
Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node on which
`zk-1` is scheduled.
-->
-
-在第一个终端持续观察 StatefulSet 的 Pods并 drain `zk-1` 调度的节点。
+在第一个终端中持续观察 StatefulSet 的 Pods 并腾空 `zk-1` 调度所在的节点。
```shell
kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
```

```
"kubernetes-node-ixsl" cordoned
@@ -1413,42 +1466,42 @@ node "kubernetes-node-ixsl" drained
The `zk-1` Pod cannot be scheduled because the `zk` `StatefulSet` contains a `PodAntiAffinity` rule preventing
co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending state.
-->
-
-`zk-1` Pod 不能被调度。由于 `zk` StatefulSet 包含了一个防止 Pods 共存的 PodAntiAffinity 规则,而且只有两个节点可用于调度,这个 Pod 将保持在 Pending 状态。
+`zk-1` Pod 不能被调度,这是因为 `zk` `StatefulSet` 包含了一个防止 Pods
+共存的 PodAntiAffinity 规则,而且只有两个节点可用于调度,
+这个 Pod 将保持在 Pending 状态。
```shell
kubectl get pods -w -l app=zk
```
```
-NAME READY STATUS RESTARTS AGE
-zk-0 1/1 Running 2 1h
-zk-1 1/1 Running 0 1h
-zk-2 1/1 Running 0 1h
-NAME READY STATUS RESTARTS AGE
-zk-0 1/1 Terminating 2 2h
-zk-0 0/1 Terminating 2 2h
-zk-0 0/1 Terminating 2 2h
-zk-0 0/1 Terminating 2 2h
-zk-0 0/1 Pending 0 0s
-zk-0 0/1 Pending 0 0s
-zk-0 0/1 ContainerCreating 0 0s
-zk-0 0/1 Running 0 51s
-zk-0 1/1 Running 0 1m
-zk-1 1/1 Terminating 0 2h
-zk-1 0/1 Terminating 0 2h
-zk-1 0/1 Terminating 0 2h
-zk-1 0/1 Terminating 0 2h
-zk-1 0/1 Pending 0 0s
-zk-1 0/1 Pending 0 0s
+NAME READY STATUS RESTARTS AGE
+zk-0 1/1 Running 2 1h
+zk-1 1/1 Running 0 1h
+zk-2 1/1 Running 0 1h
+NAME READY STATUS RESTARTS AGE
+zk-0 1/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 ContainerCreating 0 0s
+zk-0 0/1 Running 0 51s
+zk-0 1/1 Running 0 1m
+zk-1 1/1 Terminating 0 2h
+zk-1 0/1 Terminating 0 2h
+zk-1 0/1 Terminating 0 2h
+zk-1 0/1 Terminating 0 2h
+zk-1 0/1 Pending 0 0s
+zk-1 0/1 Pending 0 0s
```
-
-继续观察 stateful set 的 Pods 并 drain `zk-2` 调度的节点。
+继续观察 StatefulSet 中的 Pods 并腾空 `zk-2` 调度所在的节点。
```shell
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
@@ -1469,11 +1522,10 @@ You cannot drain the third node because evicting `zk-2` would violate `zk-budget
Use `zkCli.sh` to retrieve the value you entered during the sanity test from `zk-0`.
-->
-
使用 `CTRL-C` 终止 kubectl。
-你不能 drain 第三个节点,因为删除 `zk-2` 将和 `zk-budget` 冲突。然而这个节点仍然保持 cordoned。
-
+你不能腾空第三个节点,因为驱逐 `zk-2` 将和 `zk-budget` 冲突。
+然而这个节点仍然处于隔离状态(Cordoned)。
使用 `zkCli.sh` 从 `zk-0` 取回你在健康检查中输入的数值。
@@ -1484,7 +1536,6 @@ kubectl exec zk-0 zkCli.sh get /hello
-
由于遵守了 PodDisruptionBudget,服务仍然可用。
```
@@ -1506,12 +1557,13 @@ numChildren = 0
-
-使用 [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#uncordon) 来取消对第一个节点的隔离。
+使用 [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#uncordon)
+来取消对第一个节点的隔离。
```shell
kubectl uncordon kubernetes-node-pb41
```
+
```
node "kubernetes-node-pb41" uncordoned
```
@@ -1519,44 +1571,43 @@ node "kubernetes-node-pb41" uncordoned
-
`zk-1` 被重新调度到了这个节点。等待 `zk-1` 变为 Running 和 Ready 状态。
```shell
kubectl get pods -w -l app=zk
```
+
```
-NAME READY STATUS RESTARTS AGE
-zk-0 1/1 Running 2 1h
-zk-1 1/1 Running 0 1h
-zk-2 1/1 Running 0 1h
-NAME READY STATUS RESTARTS AGE
-zk-0 1/1 Terminating 2 2h
-zk-0 0/1 Terminating 2 2h
-zk-0 0/1 Terminating 2 2h
-zk-0 0/1 Terminating 2 2h
-zk-0 0/1 Pending 0 0s
-zk-0 0/1 Pending 0 0s
-zk-0 0/1 ContainerCreating 0 0s
-zk-0 0/1 Running 0 51s
-zk-0 1/1 Running 0 1m
-zk-1 1/1 Terminating 0 2h
-zk-1 0/1 Terminating 0 2h
-zk-1 0/1 Terminating 0 2h
-zk-1 0/1 Terminating 0 2h
-zk-1 0/1 Pending 0 0s
-zk-1 0/1 Pending 0 0s
-zk-1 0/1 Pending 0 12m
-zk-1 0/1 ContainerCreating 0 12m
-zk-1 0/1 Running 0 13m
-zk-1 1/1 Running 0 13m
+NAME READY STATUS RESTARTS AGE
+zk-0 1/1 Running 2 1h
+zk-1 1/1 Running 0 1h
+zk-2 1/1 Running 0 1h
+NAME READY STATUS RESTARTS AGE
+zk-0 1/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Terminating 2 2h
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 Pending 0 0s
+zk-0 0/1 ContainerCreating 0 0s
+zk-0 0/1 Running 0 51s
+zk-0 1/1 Running 0 1m
+zk-1 1/1 Terminating 0 2h
+zk-1 0/1 Terminating 0 2h
+zk-1 0/1 Terminating 0 2h
+zk-1 0/1 Terminating 0 2h
+zk-1 0/1 Pending 0 0s
+zk-1 0/1 Pending 0 0s
+zk-1 0/1 Pending 0 12m
+zk-1 0/1 ContainerCreating 0 12m
+zk-1 0/1 Running 0 13m
+zk-1 1/1 Running 0 13m
```
-
-尝试 drain `zk-2` 调度的节点。
+尝试腾空 `zk-2` 调度所在的节点。
```shell
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
@@ -1565,7 +1616,6 @@ kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-dae
-
输出:
```
@@ -1581,10 +1631,9 @@ This time `kubectl drain` succeeds.
Uncordon the second node to allow `zk-2` to be rescheduled.
-->
-
这次 `kubectl drain` 执行成功。
-Uncordon 第二个节点以允许 `zk-2` 被重新调度。
+取消第二个节点的隔离,以允许 `zk-2` 被重新调度。
```shell
kubectl uncordon kubernetes-node-ixsl
@@ -1600,19 +1649,20 @@ If drain is used to cordon nodes and evict pods prior to taking the node offline
services that express a disruption budget will have that budget respected.
You should always allocate additional capacity for critical services so that their Pods can be immediately rescheduled.
-->
-
-你可以同时使用 `kubectl drain` 和 `PodDisruptionBudgets` 来保证你的服务在维护过程中仍然可用。如果使用了 drain 来隔离节点并在节点离线之前排出了 pods,那么表达了 disruption budget 的服务将会遵守该 budget。你应该总是为关键服务分配额外容量,这样它们的 Pods 就能够迅速的重新调度。
+你可以同时使用 `kubectl drain` 和 `PodDisruptionBudgets` 来保证你的服务
+在维护过程中仍然可用。如果使用了腾空操作来隔离节点并在节点离线之前驱逐了 Pods,
+那么设置了干扰预算的服务将会遵守该预算。
+你应该总是为关键服务分配额外容量,这样它们的 Pods 就能够迅速地重新调度。
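+作为补充(示意),你可以在执行腾空操作之前先查看干扰预算当前还允许的干扰数:
+
+```shell
+# 示意:读取 zk-pdb 当前允许的干扰数;为 0 时 kubectl drain 将被阻塞
+kubectl get pdb zk-pdb -o jsonpath='{.status.disruptionsAllowed}'
+```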
## {{% heading "cleanup" %}}
-
-
* 使用 `kubectl uncordon` 解除你集群中所有节点的隔离。
-* 你需要删除在本教程中使用的 PersistentVolumes 的持久存储媒介。请遵循必须的步骤,基于你的环境、存储配置和准备方法,保证回收所有的存储。
+* 你需要删除在本教程中使用的 PersistentVolumes 的持久存储媒介。
+ 请遵循必要的步骤,基于你的环境、存储配置和制备方法,保证回收所有的存储。
+
diff --git a/go.mod b/go.mod
index 30c6741140e99..b45ff242a49c8 100644
--- a/go.mod
+++ b/go.mod
@@ -1,34 +1,38 @@
module k8s.io/website
-go 1.14
+go 1.15
require (
- k8s.io/apimachinery v0.18.4
- k8s.io/kubernetes v1.18.4
+ k8s.io/apimachinery v0.20.0
+ k8s.io/kubernetes v1.20.0
)
replace (
- k8s.io/api => k8s.io/api v0.18.4
- k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.18.4
- k8s.io/apimachinery => k8s.io/apimachinery v0.18.4
- k8s.io/apiserver => k8s.io/apiserver v0.18.4
- k8s.io/cli-runtime => k8s.io/cli-runtime v0.18.4
- k8s.io/client-go => k8s.io/client-go v0.18.4
- k8s.io/cloud-provider => k8s.io/cloud-provider v0.18.4
- k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.18.4
- k8s.io/code-generator => k8s.io/code-generator v0.18.4
- k8s.io/component-base => k8s.io/component-base v0.18.4
- k8s.io/cri-api => k8s.io/cri-api v0.18.4
- k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.18.4
- k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.18.4
- k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.18.4
- k8s.io/kube-proxy => k8s.io/kube-proxy v0.18.4
- k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.18.4
- k8s.io/kubectl => k8s.io/kubectl v0.18.4
- k8s.io/kubelet => k8s.io/kubelet v0.18.4
- k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.18.4
- k8s.io/metrics => k8s.io/metrics v0.18.4
- k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.18.4
- k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.18.4
- k8s.io/sample-controller => k8s.io/sample-controller v0.18.4
+ k8s.io/api => k8s.io/api v0.20.0
+ k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.20.0
+ k8s.io/apimachinery => k8s.io/apimachinery v0.20.0
+ k8s.io/apiserver => k8s.io/apiserver v0.20.0
+ k8s.io/cli-runtime => k8s.io/cli-runtime v0.20.0
+ k8s.io/client-go => k8s.io/client-go v0.20.0
+ k8s.io/cloud-provider => k8s.io/cloud-provider v0.20.0
+ k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.20.0
+ k8s.io/code-generator => k8s.io/code-generator v0.20.0
+ k8s.io/component-base => k8s.io/component-base v0.20.0
+ k8s.io/component-helpers => k8s.io/component-helpers v0.20.0
+ k8s.io/controller-manager => k8s.io/controller-manager v0.20.0
+ k8s.io/cri-api => k8s.io/cri-api v0.20.0
+ k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.20.0
+ k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.20.0
+ k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.20.0
+ k8s.io/kube-proxy => k8s.io/kube-proxy v0.20.0
+ k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.20.0
+ k8s.io/kubectl => k8s.io/kubectl v0.20.0
+ k8s.io/kubelet => k8s.io/kubelet v0.20.0
+ k8s.io/kubernetes => k8s.io/kubernetes v1.20.0
+ k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.20.0
+ k8s.io/metrics => k8s.io/metrics v0.20.0
+ k8s.io/mount-utils => k8s.io/mount-utils v0.20.0
+ k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.20.0
+ k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.20.0
+ k8s.io/sample-controller => k8s.io/sample-controller v0.20.0
)
diff --git a/go.sum b/go.sum
index d0f82ad655a36..723fccaf6affa 100644
--- a/go.sum
+++ b/go.sum
@@ -2,25 +2,66 @@ bitbucket.org/bertimus9/systemstat v0.0.0-20180207000608-0eeff89b0690/go.mod h1:
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
+cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
+cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
+cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
+cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
+cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
+cloud.google.com/go v0.51.0/go.mod h1:hWtGJ6gnXH+KgDv+V0zFGDvpi07n3z8ZNj3T1RW0Gcw=
+cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
+cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
+cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
+cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
+cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
+cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
+cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
+cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
+cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk=
+cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
+cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
+cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
+cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
+cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
+cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
+dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-sdk-for-go v35.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+github.com/Azure/azure-sdk-for-go v43.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
+github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI=
+github.com/Azure/go-autorest/autorest v0.9.6/go.mod h1:/FALq9T/kS7b5J5qsQ+RSTUdAmGFqi0vUdVNNx8q630=
+github.com/Azure/go-autorest/autorest v0.11.1/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw=
github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0=
+github.com/Azure/go-autorest/autorest/adal v0.8.2/go.mod h1:ZjhuQClTqx435SRJ2iMlOxPYt3d2C/T/7TiQCVZSn3Q=
+github.com/Azure/go-autorest/autorest/adal v0.9.0/go.mod h1:/c022QCutn2P7uY+/oQWWNcK9YU+MH96NgK+jErpbcg=
+github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA=
+github.com/Azure/go-autorest/autorest/date v0.2.0/go.mod h1:vcORJHLJEh643/Ioh9+vPmf1Ij9AEBM5FuBIXLmIy0g=
+github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
+github.com/Azure/go-autorest/autorest/mocks v0.3.0/go.mod h1:a8FDP3DYzQ4RYfVAxAN3SVSiiO77gL2j2ronKKP0syM=
+github.com/Azure/go-autorest/autorest/mocks v0.4.0/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
+github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/autorest/to v0.2.0/go.mod h1:GunWKJp1AEqgMaGLV+iocmRAJWqST1wQYhyyjXJ3SJc=
github.com/Azure/go-autorest/autorest/validation v0.1.0/go.mod h1:Ha3z/SqBeaalWQvokg3NZAlQTalVMtOIAs1aGK7G6u8=
github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc=
+github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk=
+github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/GoogleCloudPlatform/k8s-cloud-provider v0.0.0-20190822182118-27a4ced34534/go.mod h1:iroGtC8B3tQiqtds1l+mgk/BBOrxbqjH+eUfFQYRc14=
+github.com/GoogleCloudPlatform/k8s-cloud-provider v0.0.0-20200415212048-7901bc822317/go.mod h1:DF8FZRxMHMGv/vP2lQP6h+dYzzjpuRn24VeRiYn3qjQ=
github.com/JeffAshton/win_pdh v0.0.0-20161109143554-76bb4ee9f0ab/go.mod h1:3VYc5hodBMJ5+l/7J4xAyMeuM2PNuepvHlGs8yilUCA=
github.com/MakeNowJust/heredoc v0.0.0-20170808103936-bb23615498cd/go.mod h1:64YHyfSL2R96J44Nlwm39UHepQbyR5q10x7iYa1ks2E=
github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
+github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
+github.com/Microsoft/go-winio v0.4.15/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
github.com/Microsoft/hcsshim v0.0.0-20190417211021-672e52e9209d/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
+github.com/Microsoft/hcsshim v0.8.10-0.20200715222032-5eafd1556990/go.mod h1:ay/0dTb7NsG8QMDfsRfLHgZo/6xAJShLe1+ePPflihk=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
+github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/OpenPeeDeeP/depguard v1.0.0/go.mod h1:7/4sitnI9YlQgTLLk734QlzXT8DuHVnAyztLplQjk+o=
github.com/OpenPeeDeeP/depguard v1.0.1/go.mod h1:xsIw86fROiiwelg+jB2uM9PiKihMMmUx/1V+TNhjQvM=
github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
@@ -33,15 +74,21 @@ github.com/StackExchange/wmi v0.0.0-20180116203802-5d049714c4a6/go.mod h1:3eOhrU
github.com/agnivade/levenshtein v1.0.1/go.mod h1:CURSv5d9Uaml+FovSIICkLbAUZ9S4RqaHDIsdSBg7lM=
github.com/ajstarks/svgo v0.0.0-20180226025133-644b8db467af/go.mod h1:K08gAheRH3/J6wwsYMMT4xOr94bZjxIelGM0+d/wbFw=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
+github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
+github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo29Kk6CurOXKm700vrz8f0KW0JNfpkRJY/8=
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
+github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
+github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/asaskevich/govalidator v0.0.0-20180720115003-f9ffefc3facf/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/auth0/go-jwt-middleware v0.0.0-20170425171159-5493cabe49f7/go.mod h1:LWMyo4iOLWXHGdBki7NIht1kHru/0wM179h+d3g8ATM=
+github.com/aws/aws-sdk-go v1.6.10/go.mod h1:ZRmQr0FajVIyZ4ZzBYKG5P3ZqPz9IHG41ZoMu1ADI3k=
github.com/aws/aws-sdk-go v1.28.2/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
+github.com/aws/aws-sdk-go v1.35.24/go.mod h1:tlPOdRjfxPBpNIwqDj61rmsnA85v9jc0Ps9+muhnW+k=
github.com/bazelbuild/bazel-gazelle v0.18.2/go.mod h1:D0ehMSbS+vesFsLGiD6JXu3mVEzOlfUl8wNnq+x/9p0=
github.com/bazelbuild/bazel-gazelle v0.19.1-0.20191105222053-70208cbdc798/go.mod h1:rPwzNHUqEzngx1iVBfO/2X2npKaT3tqPqqHW6rVsn/A=
github.com/bazelbuild/buildtools v0.0.0-20190731111112-f720930ceb60/go.mod h1:5JP0TXzWDHXv8qvxRC4InIazwdyDseBDbzESUMKk1yU=
@@ -50,32 +97,70 @@ github.com/bazelbuild/rules_go v0.0.0-20190719190356-6dae44dc5cab/go.mod h1:MC23
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0 h1:HWo1m869IqiPhD389kmkxeTalrjNbbJTC8LXupb+sl0=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
+github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
+github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bifurcation/mint v0.0.0-20180715133206-93c51c6ce115/go.mod h1:zVt7zX3K/aDCk9Tj+VM7YymsX66ERvzCJzw8rFCX2JU=
+github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
+github.com/blang/semver v3.1.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/blang/semver v3.5.0+incompatible h1:CGxCgetQ64DKk7rdZ++Vfnb1+ogGNnB17OJKJXD2Cfs=
github.com/blang/semver v3.5.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
+github.com/blang/semver v3.5.1+incompatible h1:cQNTCjp13qL8KC3Nbxr/y2Bqb63oX6wdnnjpJbkM4JQ=
+github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/boltdb/bolt v1.3.1/go.mod h1:clJnj/oiGkjum5o1McbSZDSLxVThjynRyGBgiAx27Ps=
github.com/bradfitz/go-smtpd v0.0.0-20170404230938-deb6d6237625/go.mod h1:HYsPBTaaSFSlLx/70C2HPIMNZpVV8+vt/A+FMnYP11g=
github.com/caddyserver/caddy v1.0.3/go.mod h1:G+ouvOY32gENkJC+jhgl62TyhvqEsFaDiZ4uw0RzP1E=
github.com/cenkalti/backoff v2.1.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/prettybench v0.0.0-20150116022406-03b8cfe5406c/go.mod h1:Xe6ZsFhtM8HrDku0pxJ3/Lr51rwykrzgFwpmTzleatY=
+github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
+github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
+github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
+github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5/go.mod h1:/iP1qXHoty45bqomnu2LM+VVyAEdWN+vtSHGlQgyxbw=
github.com/checkpoint-restore/go-criu v0.0.0-20181120144056-17b0214f6c48/go.mod h1:TrMrLQfeENAPYPRsJuq3jsqdlRh3lvi6trTZJG8+tho=
+github.com/checkpoint-restore/go-criu/v4 v4.0.2/go.mod h1:xUQBLp4RLc5zJtWY++yjOoMoB5lihDt7fai+75m+rGw=
+github.com/checkpoint-restore/go-criu/v4 v4.1.0/go.mod h1:xUQBLp4RLc5zJtWY++yjOoMoB5lihDt7fai+75m+rGw=
github.com/cheekybits/genny v0.0.0-20170328200008-9127e812e1e9/go.mod h1:+tQajlRqAUrPI7DOSpB0XAqZYtQakVtB7wXkRAgjxjQ=
+github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
+github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
+github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/cilium/ebpf v0.0.0-20191025125908-95b36a581eed/go.mod h1:MA5e5Lr8slmEg9bt0VpxxWqJlO4iwu3FBdHUzV7wQVg=
+github.com/cilium/ebpf v0.0.0-20200110133405-4032b1d8aae3/go.mod h1:MA5e5Lr8slmEg9bt0VpxxWqJlO4iwu3FBdHUzV7wQVg=
+github.com/cilium/ebpf v0.0.0-20200507155900-a9f01edf17e3/go.mod h1:XT+cAw5wfvsodedcijoh1l9cf7v1x9FlFB/3VmF/O8s=
+github.com/cilium/ebpf v0.0.0-20200601085316-9f1617e5c574/go.mod h1:XT+cAw5wfvsodedcijoh1l9cf7v1x9FlFB/3VmF/O8s=
+github.com/cilium/ebpf v0.0.0-20200702112145-1c8d4c9ef775/go.mod h1:7cR51M8ViRLIdUjrmSXlK9pkrsDlLHbO8jiB8X8JnOc=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/clusterhq/flocker-go v0.0.0-20160920122132-2b8b7259d313/go.mod h1:P1wt9Z3DP8O6W3rvwCt0REIlshg1InHImaLW0t3ObY0=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa h1:OaNxuTZr7kxeODyLWsRMC+OD03aFUH+mW6r2d+MWa5Y=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/codegangsta/negroni v1.0.0/go.mod h1:v0y3T5G7Y1UlFfyxFn/QLRU4a2EuNau2iZY63YTKWo0=
github.com/container-storage-interface/spec v1.2.0/go.mod h1:6URME8mwIBbpVyZV93Ce5St17xBiQJQY67NDsuohiy4=
+github.com/containerd/cgroups v0.0.0-20200531161412-0dbf7f05ba59/go.mod h1:pA0z1pT8KYB3TCXK/ocprsh7MAkoW8bZVzPdih9snmM=
github.com/containerd/console v0.0.0-20170925154832-84eeaae905fa/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
+github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
+github.com/containerd/console v1.0.0/go.mod h1:8Pf4gM6VEbTNRIT26AyyU7hxdQU3MvAvxVI0sc00XBE=
github.com/containerd/containerd v1.0.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.3/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.1/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
+github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
+github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
+github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
+github.com/containerd/ttrpc v1.0.0/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
+github.com/containerd/ttrpc v1.0.2/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
+github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
github.com/containerd/typeurl v0.0.0-20190228175220-2a93cfde8c20/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
+github.com/containerd/typeurl v1.0.0/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
+github.com/containerd/typeurl v1.0.1/go.mod h1:TB1hUtrpaiO88KEK56ijojHS1+NeF0izUACaJW2mdXg=
github.com/containernetworking/cni v0.7.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
+github.com/containernetworking/cni v0.8.0/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
github.com/coredns/corefile-migration v1.0.6/go.mod h1:OFwBp/Wc9dJt5cAZzHWMNhK1r5L0p0jDwIBc6j8NC8E=
+github.com/coredns/corefile-migration v1.0.10/go.mod h1:RMy/mXdeDlYwzt0vdMEJvT2hGJ2I86/eO0UdXmH9XNI=
+github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
+github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
@@ -85,10 +170,16 @@ github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7
github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e h1:Wf6HqHfScWJN9/ZjdUKyjop4mf3Qdd+1TvvltAvM3m8=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/go-systemd/v22 v22.0.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
+github.com/coreos/go-systemd/v22 v22.1.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/coreos/pkg v0.0.0-20180108230652-97fdf19511ea h1:n2Ltr3SrfQlf/9nOna1DoGKxLx3qTSI8Ttl6Xrqp6mw=
github.com/coreos/pkg v0.0.0-20180108230652-97fdf19511ea/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
+github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f h1:lBNOc5arjvs8E5mO2tbpBpLoyyu8B6e44T7hJy6potg=
+github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
+github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
+github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/cyphar/filepath-securejoin v0.2.2/go.mod h1:FpkQEhXnPnOthhzymB7CGsFk2G9VLXONKD9G7QGMM+4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -97,14 +188,20 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/daviddengcn/go-colortext v0.0.0-20160507010035-511bcaf42ccd/go.mod h1:dv4zxwHi5C/8AeI+4gX4dCWOIvNi7I6JCSX0HvlKPgE=
github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
+github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
+github.com/docker/distribution v2.7.1-0.20190205005809-0d3efadf0154+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.7.1+incompatible h1:a5mlkVzth6W5A4fOsS3D2EO5BUmsJpcB+cRlLU7cSug=
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v0.7.3-0.20190327010347-be7ac8be2ae0/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker v1.4.2-0.20200309214505-aa6a9891b09c/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker v17.12.0-ce-rc1.0.20200916142827-bd33bbf0497b+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.3.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
+github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
+github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
@@ -115,13 +212,17 @@ github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.m
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/euank/go-kmsg-parser v2.0.0+incompatible/go.mod h1:MhmAMZ8V4CYH4ybgdRwPr2TU5ThnS43puaKEMpja1uw=
github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
+github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d/go.mod h1:ZZMPRZwes7CROmyNKgQzC3XPs6L/G2EJLHddWejkmf4=
github.com/fatih/camelcase v1.0.0/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwoZc+/fpc=
github.com/fatih/color v1.6.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
github.com/fogleman/gg v1.2.1-0.20190220221249-0403632d5b90/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzPa1k=
+github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
+github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
+github.com/fvbommel/sortorder v1.0.1/go.mod h1:uk88iVf1ovNn1iLfgUVU2F9o5eO30ui720w+kxuqRs0=
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/gliderlabs/ssh v0.1.1/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
@@ -130,10 +231,18 @@ github.com/globalsign/mgo v0.0.0-20181015135952-eeefdecb41b8/go.mod h1:xkRDCp4j0
github.com/go-acme/lego v2.5.0+incompatible/go.mod h1:yzMNe9CasVUhkquNvti5nAtPmG94USbYxYrZfTkIn0M=
github.com/go-bindata/go-bindata v3.1.1+incompatible/go.mod h1:xK8Dsgwmeed+BBsSy2XTopBn/8uK2HWuGSnA11C3Joo=
github.com/go-critic/go-critic v0.3.5-0.20190526074819-1df300866540/go.mod h1:+sE8vrLDS2M0pZkBk0wy6+nLdKexVDrl/jBqQOTDThA=
+github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
+github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
+github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
+github.com/go-ini/ini v1.9.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
+github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-lintpack/lintpack v0.5.2/go.mod h1:NwZuYi2nUHho8XEIZ6SIxihrnPoqBTDqfpXvXAN0sXM=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
+github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
+github.com/go-logr/logr v0.2.0 h1:QvGt2nLcHH0WK9orKa+ppBPAxREcH364nPUedEpK0TY=
+github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-ole/go-ole v1.2.1/go.mod h1:7FAglXiTm7HKlQRDeOQ6ZNUHidzCWXuZWq/1dTyBNF8=
github.com/go-openapi/analysis v0.0.0-20180825180245-b006789cd277/go.mod h1:k70tL6pCuVxPJOHXQ+wIac1FUrvNkHolPie/cLEU6hI=
github.com/go-openapi/analysis v0.17.0/go.mod h1:IowGgpVeD0vNm45So8nr+IcQ3pxVtpRoBWb8PVZO0ik=
@@ -195,6 +304,7 @@ github.com/go-toolsmith/strparse v1.0.0/go.mod h1:YI2nUKP9YGZnL/L1/DLFBfixrcjslW
github.com/go-toolsmith/typep v1.0.0/go.mod h1:JSQCQMUPdRlMZFswiq3TGpNp1GMktqkR2Ns5AIQkATU=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/godbus/dbus v0.0.0-20181101234600-2ff6f7ffd60f/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
+github.com/godbus/dbus/v5 v5.0.3/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
@@ -204,15 +314,34 @@ github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekf
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903 h1:LbsanbbD6LieFkXbj9YNNBupiGHJgFeLpO0j0Fza1h8=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.0.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
+github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
+github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/protobuf v0.0.0-20161109072736-4bd1920723d7/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
+github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
+github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
+github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
+github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
+github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
+github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
+github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
+github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
+github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
+github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
+github.com/golang/protobuf v1.4.3 h1:JjCZWpVbqXDqFVmTfYWEVTMIYrL/NPdPSCHPJ0T/raM=
+github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2/go.mod h1:k9Qvh+8juN+UKMCS/3jFtGICgW8O96FVaZsaxdzDkR4=
github.com/golangci/dupl v0.0.0-20180902072040-3e9179ac440a/go.mod h1:ryS0uhF+x9jgbj/N71xsEqODy9BN81/GonCZiOzirOk=
github.com/golangci/errcheck v0.0.0-20181223084120-ef45e06d44b6/go.mod h1:DbHgvLiFKX1Sh2T1w8Q/h4NAI8MHIpzCdnBUDTXU3I0=
@@ -237,9 +366,17 @@ github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Z
github.com/google/btree v1.0.0 h1:0udJVsspx3VBr5FwtLhQQtuAsVc79tTq0ocGIPAU6qo=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/cadvisor v0.35.0/go.mod h1:1nql6U13uTHaLYB8rLS5x9IJc2qT6Xd/Tr1sTX6NE48=
+github.com/google/cadvisor v0.37.0/go.mod h1:OhDE+goNVel0eGY8mR7Ifq1QUI1in5vJBIgIpcajK/I=
+github.com/google/cadvisor v0.38.5/go.mod h1:1OFB9sOOMkBdUBGCO/1SArawTnDscgMzTodacVDe8mA=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
+github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.2 h1:X2ev0eStA3AbceY54o37/0PQ/UWqKEiiO2dKL5OPaFM=
+github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -247,56 +384,95 @@ github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.1.2 h1:EVhdT+1Kseyi1/pUmXKaFxYsDNy9RQYkMWRH68J/W7Y=
+github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
+github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/googleapis/gnostic v0.1.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
+github.com/googleapis/gnostic v0.4.1 h1:DLJCy1n/vrD4HPjOvYcT8aYQXpPIzoRZONaYwyycI+I=
+github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
github.com/gophercloud/gophercloud v0.1.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.7.0/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
+github.com/gorilla/mux v1.7.3/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
+github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.0 h1:WDFjx/TMzVgy9VdMMQi2K2Emtwi2QcUQsztZ/zLaH/Q=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
+github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gostaticanalysis/analysisutil v0.0.0-20190318220348-4088753ea4d3/go.mod h1:eEOZF4jCKGi+aprrirO9e7WKB3beBRtWgqGunKl6pKE=
github.com/gostaticanalysis/analysisutil v0.0.3/go.mod h1:eEOZF4jCKGi+aprrirO9e7WKB3beBRtWgqGunKl6pKE=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
+github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4 h1:z53tR0945TRRQO/fLEVPI6SMv7ZflF0TEaTAoU7tOzg=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 h1:Ovs26xHkKqVztRpIrF/92BcuyuQ/YW4NSIpoGtfXNho=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
+github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.9.5 h1:UImYN5qQ8tuGpGE16ZmjvcTtTw24zw1QAp/SlnNrZhI=
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
+github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
+github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
+github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
+github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
+github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
+github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
+github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
+github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
+github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
+github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
+github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
github.com/hashicorp/golang-lru v0.0.0-20180201235237-0fb14efe8c47/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/hcl v0.0.0-20180404174102-ef8a98b0bbce/go.mod h1:oZtUIOe8dh44I2q6ScRibXws4Ajl+d+nod3AaR9vL5w=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
+github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
+github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
+github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
+github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/heketi/heketi v9.0.1-0.20190917153846-c2e2a4ab7ab9+incompatible/go.mod h1:bB9ly3RchcQqsQ9CpyaQwvva7RS5ytVoSoholZQON6o=
github.com/heketi/tests v0.0.0-20151005000721-f3775cbcefd6/go.mod h1:xGMAM8JLi7UkZt1i4FQeQy0R2T8GLUwQhOP5M1gBhy4=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
+github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5 h1:JboBksRwiiAJWvIYJVo46AfV+IAIKZpfrSzVKj42R4Q=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
+github.com/ishidawataru/sctp v0.0.0-20190723014705-7c296d48a2b5/go.mod h1:DM4VvS+hD/kDi1U1QsX2fnZowwBhqD0Dk3bRPKF/Oc8=
github.com/jellevandenhooff/dkim v0.0.0-20150330215556-f50fe3d243e1/go.mod h1:E0B/fFc00Y+Rasa88328GlI/XbtyysCtTHZS8h7IrBU=
github.com/jimstudt/http-authentication v0.0.0-20140401203705-3eca13d6893a/go.mod h1:wK6yTYYcgjHE1Z1QtXACPDjcFJyBskHEdagmnq3vsP8=
+github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
+github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
+github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/jonboulle/clockwork v0.1.0 h1:VKV+ZcuP6l3yW9doeqz6ziZGgcynBVQO+obU0+0hcPo=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.8 h1:QiWkFLKq0T7mpzwOTu6BzNDbfTE8OLrYhVKYMLF46Ok=
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
+github.com/json-iterator/go v1.1.10 h1:Kz6Cvnvv2wGdaG/V8yMvfkmNiXq9Ya2KUv4rouJJr68=
+github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
+github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/jung-kurt/gofpdf v1.0.3-0.20190309125859-24315acbbda5/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
github.com/karrick/godirwalk v1.7.5/go.mod h1:2c9FRhkDxdIbgkOnCEvnSWs71Bhugbl46shStcFDJ34=
+github.com/karrick/godirwalk v1.16.1/go.mod h1:j4mkqPuvaLI8mp1DroR3P6ad7cyYd4c1qeJ3RV7ULlk=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v0.0.0-20161130080628-0de1eaf82fa3/go.mod h1:jxZFDH7ILpTPQTk+E2s+z4CUas9lVNjIuKR4c5/zKgM=
@@ -307,9 +483,12 @@ github.com/klauspost/cpuid v0.0.0-20180405133222-e7e905edc00e/go.mod h1:Pj4uuM52
github.com/klauspost/cpuid v1.2.0/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
+github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.3/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
@@ -344,18 +523,30 @@ github.com/mattn/go-shellwords v1.0.5/go.mod h1:3xCvwCdWdlDJUrvuMn7Wuy9eWs4pE8vq
github.com/mattn/goveralls v0.0.2/go.mod h1:8d1ZMHsd7fW6IRPKQh46F2WRpyib5/X4FOpevwGNQEw=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
+github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 h1:I0XW9+e1XWDxdcEniV4rQAIOPUGDq67JSCiRCgGCZLI=
+github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/mesos/mesos-go v0.0.9/go.mod h1:kPYCMQ9gsOXVAle1OsoY4I1+9kPu8GHkf88aV59fDr4=
github.com/mholt/certmagic v0.6.2-0.20190624175158-6a42ef9fe8c2/go.mod h1:g4cOPxcjV0oFq3qwpjSA30LReKD8AoIfwAY9VvG35NY=
+github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/dns v1.1.3/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/dns v1.1.4/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/mindprince/gonvml v0.0.0-20190828220739-9ebdce4bb989/go.mod h1:2eu9pRWp8mo84xCg6KswZ+USQHjwgRhNp06sozOdsTY=
github.com/mistifyio/go-zfs v2.1.1+incompatible/go.mod h1:8AuVvqP/mXw1px98n46wfvcGfQ4ci2FwoAjKYxuo3Z4=
+github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible/go.mod h1:8AuVvqP/mXw1px98n46wfvcGfQ4ci2FwoAjKYxuo3Z4=
+github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-ps v0.0.0-20170309133038-4fdf99ab2936/go.mod h1:r1VsdOzOPt1ZSrGZWFoNhsAedKnEd6r9Np1+5blZCWk=
+github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
+github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
+github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
+github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v0.0.0-20180220230111-00c29f56e238/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
+github.com/moby/ipvs v1.0.1/go.mod h1:2pngiyseZbIKXNv7hsKj3O9UEz30c53MT9005gt2hxQ=
+github.com/moby/sys/mountinfo v0.1.3/go.mod h1:w2t2Avltqx8vE7gX5l+QiBKxODu2TX0+Syr3h52Tw4o=
+github.com/moby/term v0.0.0-20200312100748-672ec06f55cd/go.mod h1:DdlQx2hp0Ss5/fLikoLlEeIYiATotOjgB//nb973jeo=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -366,6 +557,7 @@ github.com/mohae/deepcopy v0.0.0-20170603005431-491d3605edfb/go.mod h1:TaXosZuwd
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/mozilla/tls-observatory v0.0.0-20180409132520-8791a200eb40/go.mod h1:SrKMQvPiws7F7iqYp8/TX+IhxCYhzr6N/1yb8cwHsGk=
github.com/mrunalp/fileutils v0.0.0-20171103030105-7d4729fb3618/go.mod h1:x8F1gnqOkIEiO4rqoeEEEqQbo7HjGMTvyoq3gej4iT0=
+github.com/mrunalp/fileutils v0.0.0-20200520151820-abd8a0e76976/go.mod h1:x8F1gnqOkIEiO4rqoeEEEqQbo7HjGMTvyoq3gej4iT0=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mvdan/xurls v1.1.0/go.mod h1:tQlNn3BED8bE/15hnSL2HLkDeLWpNPAwtw7wkEq44oU=
@@ -375,6 +567,7 @@ github.com/naoina/go-stringutil v0.1.0/go.mod h1:XJ2SJL9jCtBh+P9q5btrd/Ylo8XwT/h
github.com/naoina/toml v0.1.1/go.mod h1:NBIhNtsFMo3G2szEBne+bO4gS192HuIYRqfvOWb4i1E=
github.com/nbutton23/zxcvbn-go v0.0.0-20160627004424-a22cb81b2ecd/go.mod h1:o96djdrsSGy3AWPyBgZMAGfxZNfgntdJG+11KU4QvbU=
github.com/nbutton23/zxcvbn-go v0.0.0-20171102151520-eafdab6b0663/go.mod h1:o96djdrsSGy3AWPyBgZMAGfxZNfgntdJG+11KU4QvbU=
+github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
@@ -385,12 +578,26 @@ github.com/onsi/gomega v1.4.2/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1Cpa
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
+github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v1.0.0-rc1 h1:WzifXhOVOEOuFYOJAW6aQqW0TooG2iki3E3Ii+WN7gQ=
github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
+github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
+github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v1.0.0-rc10/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v1.0.0-rc90.0.20200616040943-82d2fa4eb069/go.mod h1:3Sm6Dt7OT8z88EbdQqqcRN2oCT54jbi72tT/HqgflT8=
+github.com/opencontainers/runc v1.0.0-rc91.0.20200707015106-819fcc687efb/go.mod h1:ZuXhqlr4EiRYgDrBDNfSbE4+n9JX4+V107NwAmF7sZA=
+github.com/opencontainers/runc v1.0.0-rc92/go.mod h1:X1zlU4p7wOlX4+WRCz+hvlRv8phdL7UqbYD+vQwNMmE=
github.com/opencontainers/runtime-spec v1.0.0/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.3-0.20200520003142-237cc4f519e2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.3-0.20200728170252-4d89ac9fbff6/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v1.3.1-0.20190929122143-5215b1806f52/go.mod h1:+BLncwf63G4dgOzykXAxcmnFlUaOlkDdmw/CqsW6pjs=
+github.com/opencontainers/selinux v1.5.1/go.mod h1:yTcKuYAh6R95iDpefGLQaPaRwJFwyzAJufJyiTt7s0g=
+github.com/opencontainers/selinux v1.5.2/go.mod h1:yTcKuYAh6R95iDpefGLQaPaRwJFwyzAJufJyiTt7s0g=
+github.com/opencontainers/selinux v1.6.0/go.mod h1:VVGKuOLlE7v4PJyT6h7mNWvq1rzqiriPsEqVhc+svHE=
+github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k=
github.com/pelletier/go-toml v1.1.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
@@ -398,25 +605,43 @@ github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
github.com/pquerna/ffjson v0.0.0-20180717144149-af8b230fcd20/go.mod h1:YARuvh7BUWHNhzDq2OM5tzR2RiCcN2D7sapiKyCel/M=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
+github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0 h1:vrDKnkGzuGvhNAL56c7DBz29ZL+KxnoR0x7enabFceM=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
+github.com/prometheus/client_golang v1.7.1 h1:NTGy1Ja9pByO+xAeH/qiWnLrKtr3hJPNjaVUwnjpdpA=
+github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
+github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1 h1:K0MGApIoQvMw27RTdJkPbr3JZ7DNbtxQNyi5STVM6Kw=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
+github.com/prometheus/common v0.10.0 h1:RyRA7RzGXQZiW+tGMr7sxa85G1z0yOpM1qq5c8lNawc=
+github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
+github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
+github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
+github.com/prometheus/procfs v0.0.0-20190522114515-bc1a522cf7b1/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2 h1:6LJUbpNm42llc4HRCuvApCSWB/WfhuNo9K98Q9sNGfs=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
+github.com/prometheus/procfs v0.1.3 h1:F0+tqvhOksq22sc6iCHF5WGlWjdwj92p0udFh1VFBS8=
+github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
+github.com/prometheus/procfs v0.2.0 h1:wH4vA7pcjKuZzjF7lM8awk4fnuJO6idemZXoKnULUx4=
+github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
+github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/quasilyte/go-consistent v0.0.0-20190521200055-c6f3937de18c/go.mod h1:5STLWrekHfjyYwxBRVRXNOSewLJ3PWfDJd1VyTS21fI=
github.com/quobyte/api v0.1.2/go.mod h1:jL7lIHrmqQ7yh05OJ+eEEdHr0u/kmT1Ff9iHd+4H6VI=
+github.com/quobyte/api v0.1.8/go.mod h1:jL7lIHrmqQ7yh05OJ+eEEdHr0u/kmT1Ff9iHd+4H6VI=
github.com/remyoudompheng/bigfft v0.0.0-20170806203942-52369c62f446/go.mod h1:uYEyJGbgTkfkS4+E/PavXkNJcbFIpEtjt2B0KDQ5+9M=
github.com/robfig/cron v1.1.0 h1:jk4/Hud3TTdcrJgUOBgsqrZBarcxl6ADIjSC2iniwLY=
github.com/robfig/cron v1.1.0/go.mod h1:JGuDeoQd7Z6yL4zQhZ3OPEVHB7fL6Ka6skscFHfmt2k=
@@ -424,27 +649,34 @@ github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6So
github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rubiojr/go-vhd v0.0.0-20160810183302-0bfd3b39853c/go.mod h1:DM5xW0nvfNNm2uytzsvhI3OnX8uzaRAg8UX/CnDqbto=
+github.com/rubiojr/go-vhd v0.0.0-20200706105327-02e210299021/go.mod h1:DM5xW0nvfNNm2uytzsvhI3OnX8uzaRAg8UX/CnDqbto=
github.com/russross/blackfriday v0.0.0-20170610170232-067529f716f4/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
+github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
+github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/ryanuber/go-glob v0.0.0-20170128012129-256dc444b735/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIHxIXzX/Yc=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
+github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/seccomp/libseccomp-golang v0.9.1/go.mod h1:GbW5+tmTXfcxTToHLXlScSlAvWlF4P2Ca7zGrPiEpWo=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/shirou/gopsutil v0.0.0-20180427012116-c95755e4bcd7/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
github.com/shirou/w32 v0.0.0-20160930032740-bb4de0191aa4/go.mod h1:qsXQc7+bwAM3Q1u/4XEfrquwF8Lw7D7y5cD8CuHnfIc=
github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk=
github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ=
+github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.0.5/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.0.6/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2 h1:SPIRibHv4MatM3XXNO2BJeFLZwZ2LvZgfQ5+UNI2im4=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
+github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4 h1:0HKaf1o97UwFjHH9o5XsHUOF+tqmdA7KEzXLpiyaw0E=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/sourcegraph/go-diff v0.5.1/go.mod h1:j2dHj3m8aZgQO8lMTcTnBcXkRRRqi34cd2MNlA9u1mE=
+github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.0/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
@@ -454,6 +686,8 @@ github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkU
github.com/spf13/cobra v0.0.2/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
+github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
+github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI=
github.com/spf13/jwalterweatherman v0.0.0-20180109140146-7c0cea34c8ec/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
@@ -464,7 +698,10 @@ github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.0.2/go.mod h1:A8kyI5cUJhb8N+3pkfONlcEcZbueH6nhAm0Fq7SrnBM=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
+github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
+github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
github.com/storageos/go-api v0.0.0-20180912212459-343b3eff91fc/go.mod h1:ZrLn+e0ZuF3Y65PNF6dIwbJPZqfmtCXxFm9ckv0agOY=
+github.com/storageos/go-api v2.2.0+incompatible/go.mod h1:ZrLn+e0ZuF3Y65PNF6dIwbJPZqfmtCXxFm9ckv0agOY=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
@@ -472,6 +709,8 @@ github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXf
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
+github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/tarm/serial v0.0.0-20180830185346-98f6abe2eb07/go.mod h1:kDXzergiv9cbyO7IOYJZWg1U88JhDg3PB6klq9Hg2pA=
github.com/thecodeteam/goscaleio v0.1.0/go.mod h1:68sdkZAsK8bvEwBlbQnlLS+xU+hvLYM/iQ8KXej1AwM=
@@ -479,10 +718,14 @@ github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhV
github.com/timakin/bodyclose v0.0.0-20190721030226-87058b9bfcec/go.mod h1:Qimiffbc6q9tBWlVV6x0P9sat/ao1xEkREYPPj9hphk=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8 h1:ndzgwNDnKIqyCvHTXaCqh9KlOWKvBry6nuXMJmonVsE=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
+github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
+github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/ultraware/funlen v0.0.1/go.mod h1:Dp4UiAus7Wdb9KUZsYWZEWiRzGuM2kXM1lPbfaF6xhA=
github.com/ultraware/funlen v0.0.2/go.mod h1:Dp4UiAus7Wdb9KUZsYWZEWiRzGuM2kXM1lPbfaF6xhA=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
+github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
+github.com/urfave/cli v1.22.2/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/negroni v1.0.0/go.mod h1:Meg73S6kFm/4PpbYdq35yYWoCZ9mS/YSx+lKnmiohz4=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.2.0/go.mod h1:4vX61m6KN+xDduDNwXrhIAVZaZaZiQ1luJk8LWSxF3s=
@@ -490,22 +733,39 @@ github.com/valyala/quicktemplate v1.1.1/go.mod h1:EH+4AkTd43SvgIbQHYu59/cJyxDoOV
github.com/valyala/tcplisten v0.0.0-20161114210144-ceec8f93295a/go.mod h1:v3UYOV9WzVtRmSR+PDvWpU/qWl4Wa5LApYYX4ZtKbio=
github.com/vektah/gqlparser v1.1.2/go.mod h1:1ycwN7Ij5njmMkPPAOaRFY4rET2Enx7IkVv3vaXspKw=
github.com/vishvananda/netlink v1.0.0/go.mod h1:+SR5DhBJrl6ZM7CoCKvpw5BKroDKQ+PJqOg65H/2ktk=
+github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netns v0.0.0-20171111001504-be1fbeda1936/go.mod h1:ZjcWmFBXmLKZu9Nxj3WKYEafiSqer2rnvPr0en9UNpI=
+github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
+github.com/vishvananda/netns v0.0.0-20200520041808-52d707b772fe/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
+github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
github.com/vmware/govmomi v0.20.3/go.mod h1:URlwyTFZX72RmxtxuaFL2Uj3fD1JTvZdx59bHWk6aFU=
+github.com/willf/bitset v1.1.11-0.20200630133818-d5bec3311243/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 h1:eY9dn8+vbi4tKz5Qo6v2eYzo7kUS51QINcR5jNpbZS8=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xlab/handysort v0.0.0-20150421192137-fb3537ed64a1/go.mod h1:QcJo0QPSfTONNIgpN5RA8prR7fF8nkF6cTWTcNerRO8=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
+github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.3 h1:MUGmc65QhB3pIlaQ5bB4LwqSj6GIonVJXpZiaKNyaKk=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
+go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738 h1:VcrIfasaLFkyjk6KNlXQSzO+B0fZcnECiDrKJsfxka0=
go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738/go.mod h1:dnLIgRNXwCJa5e+c6mIZCrds/GIG4ncV9HhK5PX7jPg=
+go.etcd.io/etcd v0.5.0-alpha.5.0.20200819165624-17cef6e3e9d5 h1:Gqga3zA9tdAcfqobUGjSoCob5L3f8Dt5EuOp3ihNZko=
+go.etcd.io/etcd v0.5.0-alpha.5.0.20200819165624-17cef6e3e9d5/go.mod h1:skWido08r9w6Lq/w70DO5XYIKMu4QFu1+4VsqLQuJy8=
+go.etcd.io/etcd v0.5.0-alpha.5.0.20200910180754-dd1b699fc489 h1:1JFLBqwIgdyHN1ZtgjTBwO+blA6gVOmZurpiMEsETKo=
+go.etcd.io/etcd v0.5.0-alpha.5.0.20200910180754-dd1b699fc489/go.mod h1:yVHk9ub3CSBatqGNg7GRmsnfLWtoW60w4eDYfh7vHDg=
go.mongodb.org/mongo-driver v1.0.3/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.1.1/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.1.2/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
+go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
+go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
+go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.uber.org/atomic v1.3.2 h1:2Oa65PReHzfn29GpvgsYwloV9AVFHPDk8tYxt2c2tr4=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
+go.uber.org/atomic v1.4.0 h1:cxzIVoETapQEqDhQu3QfnvXAV4AlzcvUCxkVUFw3+EU=
+go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/multierr v1.1.0 h1:HoEmRHQPVSqub6w2z2d2EOVs2fjyFRGyofhKuyDq0QI=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0 h1:ORx85nbTijNz8ljznvCMR1ZBIPKFn3jQrag10X2AsuM=
@@ -514,6 +774,7 @@ go4.org v0.0.0-20180809161055-417644f6feb5/go.mod h1:MkTOUMDaeVYJUOUsaDXIhWPZYa1
golang.org/x/build v0.0.0-20190927031335-2835ba2e683f/go.mod h1:fYw7AShPAhGMdXqA9gRadk/CcMsvLlClpE5oBwnS3dM=
golang.org/x/crypto v0.0.0-20180426230345-b49d69b5da94/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
+golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190123085648-057139ce5d2b/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
@@ -523,24 +784,52 @@ golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a/go.mod h1:djNgcEr1/C05ACk
golang.org/x/crypto v0.0.0-20190320223903-b7391e95e576/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190424203555-c05e17bb3b2d/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190617133340-57b3e21c3d56/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975 h1:/Tl7pH94bvbAAHBdZJT947M/+gp0+CqQXDtMRC0fseo=
golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 h1:psW17arqaxU48Z5kZ0CQnkZWQJsqcURM6tKiBApRjXI=
+golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0 h1:hb9wdF1z5waM+dSIICn1l0DkLVDT3hqhhQsDNUmHPRE=
+golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190125153040-c74c464bbbf2/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190312203227-4b39c73a6495/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
+golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
+golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
+golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
+golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
+golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
+golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
+golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
+golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/image v0.0.0-20180708004352-c73c2afc3b81/go.mod h1:ux5Hcp/YLpHSI86hEcLt0YII63i6oz57MZXIpbrjZUs=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
+golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
+golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
+golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
+golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
+golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
+golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
+golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
+golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20170915142106-8351a756f30f/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -548,8 +837,10 @@ golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73r
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180911220305-26e67e76b6c3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181102091132-c10e9556a7bc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190125091013-d26f9f9a57f3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -558,19 +849,38 @@ golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190320064053-1272bf9dcd53/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190328230028-74de082e2cca/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190502183928-7f726cade0ab/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
+golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191004110552-13f9640d40b9 h1:rjwSpXsdiK0dV8/Naq3kAw9ymfAeJIyd0upUIElB+lI=
golang.org/x/net v0.0.0-20191004110552-13f9640d40b9/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200707034311-ab3426394381 h1:VXak5I6aEWmAXeQjA+QSZzlgNrpq9mjcfDemuexIKsU=
+golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
+golang.org/x/net v0.0.0-20201110031124-69a78807bb2b h1:uwuIcX0g4Yl1NC5XAz37xsr2lTtcqevgzYNVt49waME=
+golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6 h1:pE8b58s1HRDMi8RDc79m0HISf9D4TzseP40cEA6IGfs=
+golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d h1:TzXSXBo42m9gQenoE3b9BGiEpg5IG2JkU5FkPIawgtw=
+golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/perf v0.0.0-20180704124530-6e6d33e29852/go.mod h1:JLpeXjPJfIyPr5TlbXLkXWLhP8nz10XfvxElABhCtcw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -580,9 +890,11 @@ golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20170830134202-bb24a47a89ea/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20171026204733-164713f0dfce/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -595,23 +907,65 @@ golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190321052220-f7bb7a8bee54/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502175342-a43fa875dd82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7 h1:HmbHVPwrPEKPGLAcHSrMe6+hqSUlvZU0rab6x5EXfGU=
golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200120151820-655fe14d7479/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200217220822-9197077df867/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200327173247-9dae0f8f5775/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4 h1:5/PjkGUjvEU5Gl6BxmvKRPpqo2uNMv4rcHBMwzk/st8=
+golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200728102440-3e129f6d46b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201110211018-35f3e6cf4a65/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201112073958-5cba982894dd h1:5CtCZbICpIOFdgO940moixOPjc0178IU44m4EjOO5IY=
+golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.0.0-20170915090833-1cbadb444a80/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
+golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
+golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.4 h1:0YWbFKbhXG/wIiuHDSKpS0Iy7FSA+u45VtBMfQcFTTc=
+golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20191024005414-555d28b269f0 h1:/5xXl8Y5W96D+TtHSlonuFqGHIWVuyCkGJLwGh9JJFs=
+golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e h1:EHBhcS0mlXEAVwNyO2dLfjToGsyY4j24pTs2ScHnX7s=
+golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20170915040203-e531a2a1c15f/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180525024113-a5b4c53f6e8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -636,12 +990,39 @@ golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBn
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190521203540-521d6ed310dd/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190617190820-da514acc4774/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190624222133-a101b041ded4/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190909030654-5b82db07426d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190920225731-5eefd052ad72/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
+golang.org/x/tools v0.0.0-20200505023115-26f46d2f7ef8/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200616133436-c1934b75d054/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gonum.org/v1/gonum v0.0.0-20180816165407-929014505bf4/go.mod h1:Y+Yx5eoAFn32cQvJDxZx5Dpnq+c3wtXuadVZAcxbbBo=
gonum.org/v1/gonum v0.0.0-20190331200053-3d26580ed485/go.mod h1:2ltnJ7xHfj0zHS40VVPYEAAMTa3ZGguvHGBSJeRWqE0=
gonum.org/v1/gonum v0.6.2/go.mod h1:9mxDZsDKxgMAuccQkewq682L+0eCu4dCN2yonUJTCLU=
@@ -650,27 +1031,77 @@ gonum.org/v1/netlib v0.0.0-20190331212654-76723241ea4e/go.mod h1:kS+toOQn6AQKjmK
gonum.org/v1/plot v0.0.0-20190515093506-e2840ee46a6b/go.mod h1:Wt8AAjI+ypCyYX3nZBvf6cAIx93T+c/OS2HFAYskSZc=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.6.1-0.20190607001116-5213b8090861/go.mod h1:btoxGiFvQNVUZQ8W08zLtrVS08CNpINPEfxXxgJL1Q4=
+google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
+google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.15.1-0.20200106000736-b8fc810ca6b5/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.15.1/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0 h1:KxkO13IPW4Lslp2bz+KHP2E3gtFlrIGNThxkZQ3g+4c=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
+google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55 h1:gSJIx1SDwno+2ElGhA4+qG2zF97qiUzTM+rQ0klBOcE=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
+google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200117163144-32f20d992d24/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
+google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 h1:+kGHl1aib/qcwaRi1CbqBZ1rk19r85MNUf8HaBghugY=
+google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
+google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a h1:pOwg4OoaRYScjmR4LlLgdtnyoHYTSAVhhqe5uPdpII8=
+google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
+google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
+google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.26.0 h1:2dTRdpdFEEhJYQD8EMLB61nnrzSCTbG38PhqdhvOltg=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.27.0 h1:rRYRFMVgRv6E0D70Skyfsr28tDXIuuPZyWGMPdMcnXg=
+google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.27.1 h1:zvIju4sqAGvwKspUQOhwnpcqSbzi7/H6QomNNjTL4sk=
+google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
+google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
+google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
+google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
+google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
+google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.24.0 h1:UhZDfRO8JRQru4/+LlLE0BRKGF8L+PICnvYZmx/fEGA=
+google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
+google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c=
+google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
@@ -678,6 +1109,7 @@ gopkg.in/gcfg.v1 v1.2.0/go.mod h1:yesOnuUOFQAhST5vPY4nbZsb/huCgGGXlipJsBn0b3o=
gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
+gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/mcuadros/go-syslog.v2 v2.2.1/go.mod h1:l5LPIyOOyIdQquNg+oU6Z3524YwrcqEm0aKH+5zpt2U=
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
@@ -688,57 +1120,141 @@ gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bl
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.1.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools/gotestsum v0.3.5/go.mod h1:Mnf3e5FUzXbkCfynWBGOwLssY7gTQgCHObK9tMpAriY=
+gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
grpc.go4.org v0.0.0-20170609214715-11d0a25b4919/go.mod h1:77eQGdRu53HpSqPFJFmuJdjuHRquDANNeA4x7B8WQ9o=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.2/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
+honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
+honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.18.4 h1:8x49nBRxuXGUlDlwlWd3RMY1SayZrzFfxea3UZSkFw4=
k8s.io/api v0.18.4/go.mod h1:lOIQAKYgai1+vz9J7YcDZwC26Z0zQewYOGWdyIPUUQ4=
+k8s.io/api v0.19.0 h1:XyrFIJqTYZJ2DU7FBE/bSPz7b1HvbVBuBf07oeo6eTc=
+k8s.io/api v0.19.0/go.mod h1:I1K45XlvTrDjmj5LoM5LuP/KYrhWbjUKT/SoPG0qTjw=
+k8s.io/api v0.20.0 h1:WwrYoZNM1W1aQEbyl8HNG+oWGzLpZQBlcerS9BQw9yI=
+k8s.io/api v0.20.0/go.mod h1:HyLC5l5eoS/ygQYl1BXBgFzWNlkHiAuyNAbevIn+FKg=
k8s.io/apiextensions-apiserver v0.18.4/go.mod h1:NYeyeYq4SIpFlPxSAB6jHPIdvu3hL0pc36wuRChybio=
+k8s.io/apiextensions-apiserver v0.19.0/go.mod h1:znfQxNpjqz/ZehvbfMg5N6fvBJW5Lqu5HVLTJQdP4Fs=
+k8s.io/apiextensions-apiserver v0.20.0/go.mod h1:ZH+C33L2Bh1LY1+HphoRmN1IQVLTShVcTojivK3N9xg=
k8s.io/apimachinery v0.18.4 h1:ST2beySjhqwJoIFk6p7Hp5v5O0hYY6Gngq/gUYXTPIA=
k8s.io/apimachinery v0.18.4/go.mod h1:OaXp26zu/5J7p0f92ASynJa1pZo06YlV9fG7BoWbCko=
+k8s.io/apimachinery v0.19.0 h1:gjKnAda/HZp5k4xQYjL0K/Yb66IvNqjthCb03QlKpaQ=
+k8s.io/apimachinery v0.19.0/go.mod h1:DnPGDnARWFvYa3pMHgSxtbZb7gpzzAZ1pTfaUNDVlmA=
+k8s.io/apimachinery v0.20.0 h1:jjzbTJRXk0unNS71L7h3lxGDH/2HPxMPaQY+MjECKL8=
+k8s.io/apimachinery v0.20.0/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
k8s.io/apiserver v0.18.4 h1:pn1jSQkfboPSirZopkVpEdLW4FcQLnYMaIY8LFxxj30=
k8s.io/apiserver v0.18.4/go.mod h1:q+zoFct5ABNnYkGIaGQ3bcbUNdmPyOCoEBcg51LChY8=
+k8s.io/apiserver v0.19.0 h1:jLhrL06wGAADbLUUQm8glSLnAGP6c7y5R3p19grkBoY=
+k8s.io/apiserver v0.19.0/go.mod h1:XvzqavYj73931x7FLtyagh8WibHpePJ1QwWrSJs2CLk=
+k8s.io/apiserver v0.20.0 h1:0MwO4xCoqZwhoLbFyyBSJdu55CScp4V4sAgX6z4oPBY=
+k8s.io/apiserver v0.20.0/go.mod h1:6gRIWiOkvGvQt12WTYmsiYoUyYW0FXSiMdNl4m+sxY8=
k8s.io/cli-runtime v0.18.4/go.mod h1:9/hS/Cuf7NVzWR5F/5tyS6xsnclxoPLVtwhnkJG1Y4g=
+k8s.io/cli-runtime v0.19.0/go.mod h1:tun9l0eUklT8IHIM0jors17KmUjcrAxn0myoBYwuNuo=
+k8s.io/cli-runtime v0.20.0/go.mod h1:C5tewU1SC1t09D7pmkk83FT4lMAw+bvMDuRxA7f0t2s=
k8s.io/client-go v0.18.4 h1:un55V1Q/B3JO3A76eS0kUSywgGK/WR3BQ8fHQjNa6Zc=
k8s.io/client-go v0.18.4/go.mod h1:f5sXwL4yAZRkAtzOxRWUhA/N8XzGCb+nPZI8PfobZ9g=
+k8s.io/client-go v0.19.0 h1:1+0E0zfWFIWeyRhQYWzimJOyAk2UT7TiARaLNwJCf7k=
+k8s.io/client-go v0.19.0/go.mod h1:H9E/VT95blcFQnlyShFgnFT9ZnJOAceiUHM3MlRC+mU=
+k8s.io/client-go v0.20.0 h1:Xlax8PKbZsjX4gFvNtt4F5MoJ1V5prDvCuoq9B7iax0=
+k8s.io/client-go v0.20.0/go.mod h1:4KWh/g+Ocd8KkCwKF8vUNnmqgv+EVnQDK4MBF4oB5tY=
k8s.io/cloud-provider v0.18.4/go.mod h1:JdI6cuSFPSPANEciv0v5qfwztkeyFCVc1S3krLYrw0E=
+k8s.io/cloud-provider v0.19.0 h1:Ae09nHr6BVPEzmAWbZedYC0gjsIPbt7YsIY0V/NHGr0=
+k8s.io/cloud-provider v0.19.0/go.mod h1:TYh7b7kQ6wiqF7Ftb+u3lN4IwvgOPbBrcvC3TDAW4cw=
+k8s.io/cloud-provider v0.20.0 h1:CVPQ66iyfNgeGomUq2jE/TWrfzE77bdCpemhFS8955U=
+k8s.io/cloud-provider v0.20.0/go.mod h1:Lz/luSVD5BrHDDhtVdjFh0C2qQCRYdf0b9BHQ9L+bXc=
k8s.io/cluster-bootstrap v0.18.4/go.mod h1:hNG705ec9SMN2BGlJ81R2CnyJjNKfROtAxvI9JXZdiM=
+k8s.io/cluster-bootstrap v0.19.0/go.mod h1:kBn1DKyqoM245wzz+AAnGkuysJ+9GqVbPYveTo4KiaA=
+k8s.io/cluster-bootstrap v0.20.0/go.mod h1:6WZaNIBvcvL7MkPzSRKrZDIr4u+ePW2oIWoRsEFMjmE=
k8s.io/code-generator v0.18.4/go.mod h1:TgNEVx9hCyPGpdtCWA34olQYLkh3ok9ar7XfSsr8b6c=
+k8s.io/code-generator v0.19.0/go.mod h1:moqLn7w0t9cMs4+5CQyxnfA/HV8MF6aAVENF+WZZhgk=
+k8s.io/code-generator v0.20.0/go.mod h1:UsqdF+VX4PU2g46NC2JRs4gc+IfrctnwHb76RNbWHJg=
k8s.io/component-base v0.18.4 h1:Kr53Fp1iCGNsl9Uv4VcRvLy7YyIqi9oaJOQ7SXtKI98=
k8s.io/component-base v0.18.4/go.mod h1:7jr/Ef5PGmKwQhyAz/pjByxJbC58mhKAhiaDu0vXfPk=
+k8s.io/component-base v0.19.0 h1:OueXf1q3RW7NlLlUCj2Dimwt7E1ys6ZqRnq53l2YuoE=
+k8s.io/component-base v0.19.0/go.mod h1:dKsY8BxkA+9dZIAh2aWJLL/UdASFDNtGYTCItL4LM7Y=
+k8s.io/component-base v0.20.0 h1:BXGL8iitIQD+0NgW49UsM7MraNUUGDU3FBmrfUAtmVQ=
+k8s.io/component-base v0.20.0/go.mod h1:wKPj+RHnAr8LW2EIBIK7AxOHPde4gme2lzXwVSoRXeA=
+k8s.io/component-helpers v0.20.0/go.mod h1:nx6NOtfSfGOxnSZsDJxpGbnsVuUA1UXpwDvZIrtigNk=
+k8s.io/controller-manager v0.20.0/go.mod h1:nD4qym/pmCz2v1tpqvlEBVlHW9CAZwedloM8GrJTLpg=
k8s.io/cri-api v0.18.4/go.mod h1:OJtpjDvfsKoLGhvcc0qfygved0S0dGX56IJzPbqTG1s=
+k8s.io/cri-api v0.19.0/go.mod h1:UN/iU9Ua0iYdDREBXNE9vqCJ7MIh/FW3VIL0d8pw7Fw=
+k8s.io/cri-api v0.20.0/go.mod h1:2JRbKt+BFLTjtrILYVqQK5jqhI+XNdF6UiGMgczeBCI=
k8s.io/csi-translation-lib v0.18.4/go.mod h1:FTci2m8/3oN8E+8OyblBXei8w4mwbiH4boNPeob4piE=
+k8s.io/csi-translation-lib v0.19.0/go.mod h1:zGS1YqV8U2So/t4Hz8SoRXMx5y5/KSKnA6BXXxGuo4A=
+k8s.io/csi-translation-lib v0.20.0/go.mod h1:M4CdD66GxEI6ev8aTtsA2NkK9kIF9K5VZQMcw/SsoLs=
k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/gengo v0.0.0-20200114144118-36b2048a9120/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
+k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
+k8s.io/gengo v0.0.0-20200428234225-8167cfdcfc14/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
+k8s.io/gengo v0.0.0-20201113003025-83324d819ded/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/heapster v1.2.0-beta.1/go.mod h1:h1uhptVXMwC8xtZBYsPXKVi8fpdlYkTs6k949KozGrM=
k8s.io/klog v0.0.0-20181102134211-b9b56d5dfc92/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/klog v0.3.0/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/klog v1.0.0 h1:Pt+yjF5aB1xDSVbau4VsWe+dQNzA0qv1LlXdC2dF6Q8=
k8s.io/klog v1.0.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I=
+k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
+k8s.io/klog/v2 v2.2.0 h1:XRvcwJozkgZ1UQJmfMGpvRthQHOvihEhYtDfAaxMz/A=
+k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
+k8s.io/klog/v2 v2.4.0 h1:7+X0fUguPyrKEC4WjH8iGDg3laWgMo5tMnRTIGTTxGQ=
+k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/kube-aggregator v0.18.4/go.mod h1:xOVy4wqhpivXCt07Diwdms2gonG+SONVx+1e7O+GfC0=
+k8s.io/kube-aggregator v0.19.0/go.mod h1:1Ln45PQggFAG8xOqWPIYMxUq8WNtpPnYsbUJ39DpF/A=
+k8s.io/kube-aggregator v0.20.0/go.mod h1:3Is/gzzWmhhG/rA3CpA1+eVye87lreBQDFGcAGT7gzo=
k8s.io/kube-controller-manager v0.18.4/go.mod h1:GrY1S0F7zA0LQlt0ApOLt4iMpphKTk3mFrQl1+usrfs=
+k8s.io/kube-controller-manager v0.19.0/go.mod h1:uGZyiHK73NxNEN5EZv/Esm3fbCOzeq4ndttMexVZ1L0=
+k8s.io/kube-controller-manager v0.20.0/go.mod h1:Pmli7dnwIVpwKJVeab97yBt35QEFdw65oqT5ti0ikUs=
k8s.io/kube-openapi v0.0.0-20200410145947-61e04a5be9a6/go.mod h1:GRQhZsXIAJ1xR0C9bd8UpWHZ5plfAS9fzPjJuQ6JL3E=
+k8s.io/kube-openapi v0.0.0-20200805222855-6aeccd4b50c6/go.mod h1:UuqjUnNftUyPE5H64/qeyjQoUZhGpeFDVdxjTeEVN2o=
+k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM=
k8s.io/kube-proxy v0.18.4/go.mod h1:h2c+ckQC1XpybDs53mWhLCvvM6txduWVLPQwwvGqR9M=
+k8s.io/kube-proxy v0.19.0/go.mod h1:7NoJCFgsWb7iiMB1F6bW1St5rEXC+ir2aWiJehASmTU=
+k8s.io/kube-proxy v0.20.0/go.mod h1:R97oobM6zSh3ZqFMXi5DzCH/qJXNzua/UzcDmuQRexM=
k8s.io/kube-scheduler v0.18.4/go.mod h1:vRFb/8Yi7hh670beaPrXttMpjt7H8EooDkgwFm8ts4k=
+k8s.io/kube-scheduler v0.19.0/go.mod h1:1XGjJUgstM0/0x8to+bSGSyCs3Dp3dbCEr3Io/mvd4s=
+k8s.io/kube-scheduler v0.20.0/go.mod h1:cRTGsJU3TfQvbMJBmpoPgq9rBF5cQLpLKoOafKwdZnI=
k8s.io/kubectl v0.18.4/go.mod h1:EzB+nfeUWk6fm6giXQ8P4Fayw3dsN+M7Wjy23mTRtB0=
+k8s.io/kubectl v0.19.0/go.mod h1:gPCjjsmE6unJzgaUNXIFGZGafiUp5jh0If3F/x7/rRg=
+k8s.io/kubectl v0.20.0/go.mod h1:8x5GzQkgikz7M2eFGGuu6yOfrenwnw5g4RXOUgbjR1M=
k8s.io/kubelet v0.18.4/go.mod h1:D0V9JYaTJRF+ry+9JfnM4uyg3ySRLQ02XjfQ5f2u4CM=
+k8s.io/kubelet v0.19.0/go.mod h1:cGds22piF/LnFzfAaIT+efvOYBHVYdunqka6NVuNw9g=
+k8s.io/kubelet v0.20.0/go.mod h1:lMdjO1NA+JZXSYtxb48pQmNERmC+vVIXIYkJIugVhl0=
k8s.io/kubernetes v1.18.4 h1:AYtJ24PIT91P1K8ekCrvay8LK8WctWhC5+NI0HZ8sqE=
k8s.io/kubernetes v1.18.4/go.mod h1:Efg82S+Ti02A/Mww53bxroc7IgzX2bgPsf6hT8gAs3M=
+k8s.io/kubernetes v1.19.0 h1:ir53YuXsfsuVABmtYHCTUa3xjD41Htxv3o+xoQjJdUo=
+k8s.io/kubernetes v1.19.0/go.mod h1:yhT1/ltQajQsha3tnYc9QPFYSumGM45nlZdjf7WqE1A=
+k8s.io/kubernetes v1.20.0 h1:mnc69esJC3PJgSptxNJomGz2gBthyGLSEy18WiyRH4U=
+k8s.io/kubernetes v1.20.0/go.mod h1:/xrHGNfoQphtkhZvyd5bA1lRmz+QkDVmBZu+O8QMoek=
+k8s.io/kubernetes v1.20.2 h1:EsQROw+yFsDMfjEHp52cKs4JVI6lAHA2SHGAF88cK7s=
k8s.io/legacy-cloud-providers v0.18.4/go.mod h1:Mnxtra7DxVrODfGZHPsrkLi22lwmZOlWkjyyO3vW+WM=
+k8s.io/legacy-cloud-providers v0.19.0/go.mod h1:Q5czDCPnStdpFohMpcbnqL+MLR75kUhIDIsnmwEm0/o=
+k8s.io/legacy-cloud-providers v0.20.0/go.mod h1:1jEkaU7h9+b1EYdfWDBvhFAr+QpRfUjQfK+dGhxPGfA=
k8s.io/metrics v0.18.4/go.mod h1:luze4fyI9JG4eLDZy0kFdYEebqNfi0QrG4xNEbPkHOs=
+k8s.io/metrics v0.19.0/go.mod h1:WykpW8B60OeAJx1imdwUgyOID2kDljr/Q+1zrPJ98Wo=
+k8s.io/metrics v0.20.0/go.mod h1:9yiRhfr8K8sjdj2EthQQE9WvpYDvsXIV3CjN4Ruq4Jw=
+k8s.io/mount-utils v0.20.0/go.mod h1:Jv9NRZ5L2LF87A17GaGlArD+r3JAJdZFvo4XD1cG4Kc=
k8s.io/repo-infra v0.0.1-alpha.1/go.mod h1:wO1t9WaB99V80ljbeENTnayuEEwNZt7gECYh/CEyOJ8=
k8s.io/sample-apiserver v0.18.4/go.mod h1:j5XH5FUmMd/ztoz+9ch0+hL+lsvWdgxnTV7l3P3Ijoo=
+k8s.io/sample-apiserver v0.19.0/go.mod h1:Bq9UulNoKnT72JqlkWF2JS14cXxJqcmvLtb5+EcwiNA=
+k8s.io/sample-apiserver v0.20.0/go.mod h1:tScvbz/BcUG46IOsu2YLt4EjBP7XeUuMzMbQt2tQYWw=
k8s.io/system-validators v1.0.4/go.mod h1:HgSgTg4NAGNoYYjKsUyk52gdNi2PVDswQ9Iyn66R7NI=
+k8s.io/system-validators v1.1.2/go.mod h1:bPldcLgkIUK22ALflnsXk8pvkTEndYdNuaHH6gRrl0Q=
+k8s.io/system-validators v1.2.0/go.mod h1:bPldcLgkIUK22ALflnsXk8pvkTEndYdNuaHH6gRrl0Q=
k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89 h1:d4vVOjXm687F1iLSP2q3lyPPuyvTUt3aVoBpi2DqRsU=
k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew=
+k8s.io/utils v0.0.0-20200414100711-2df71ebbae66/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
+k8s.io/utils v0.0.0-20200729134348-d5654de09c73 h1:uJmqzgNWG7XyClnU/mLPBWwfKKF1K8Hf8whTseBgJcg=
+k8s.io/utils v0.0.0-20200729134348-d5654de09c73/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
+k8s.io/utils v0.0.0-20201110183641-67b214c5f920 h1:CbnUZsM497iRC5QMVkHwyl8s2tB3g7yaSHkYPkpgelw=
+k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
modernc.org/cc v1.0.0/go.mod h1:1Sk4//wdnYJiUIxnW8ddKpaOJCF37yAdqYnkxUpaYxw=
modernc.org/golex v1.0.0/go.mod h1:b/QX9oBD/LhixY6NDh+IdGv17hgB+51fET1i2kPSmvk=
modernc.org/mathutil v1.0.0/go.mod h1:wU0vUrJsVWBZ4P6e7xtFJEhFSNsfRLJ8H458uRjg03k=
@@ -747,13 +1263,24 @@ modernc.org/xc v1.0.0/go.mod h1:mRNCo0bvLjGhHO9WsyuKVU4q0ceiDDDoEeWDJHrNx8I=
mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed/go.mod h1:Xkxe497xwlCKkIaQYRfC7CSLworTXY9RMqwhhCm+8Nc=
mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b/go.mod h1:2odslEg/xrtNQqCYg2/jCoyKnw3vv5biOc3JnIcYfL4=
mvdan.cc/unparam v0.0.0-20190209190245-fbb59629db34/go.mod h1:H6SUd1XjIs+qQCyskXg5OFSrilMRUkD8ePJpHKDPaeY=
+rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
+rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
+rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.7 h1:uuHDyjllyzRyCIvvn0OBjiRB0SgBZGqHNYAmjR7fO50=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.7/go.mod h1:PHgbrJT7lCHcxMU+mDHEm+nx46H4zuuHZkDP6icnhu0=
+sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.9 h1:rusRLrDhjBp6aYtl9sGEvQJr6faoHoDLd0YcUBTZguI=
+sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.9/go.mod h1:dzAXnQbTRyDlZPJX2SUPEqvnB+j7AJjtlox7PEwigU0=
+sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.14 h1:TihvEz9MPj2u0KWds6E2OBUXfwaL4qRJ33c7HGiJpqk=
+sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.14/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
sigs.k8s.io/kustomize v2.0.3+incompatible/go.mod h1:MkjgH3RdOWrievjo6c9T245dYlB5QeXV4WCbnt/PEpU=
sigs.k8s.io/structured-merge-diff/v3 v3.0.0-20200116222232-67a7b8c61874/go.mod h1:PlARxl6Hbt/+BC80dRLi1qAmnMqwqDg62YvvVkZjemw=
sigs.k8s.io/structured-merge-diff/v3 v3.0.0 h1:dOmIZBMfhcHS09XZkMyUgkq5trg3/jRyJYFZUiaOp8E=
sigs.k8s.io/structured-merge-diff/v3 v3.0.0/go.mod h1:PlARxl6Hbt/+BC80dRLi1qAmnMqwqDg62YvvVkZjemw=
+sigs.k8s.io/structured-merge-diff/v4 v4.0.1 h1:YXTMot5Qz/X1iBRJhAt+vI+HVttY0WkSqqhKxQ0xVbA=
+sigs.k8s.io/structured-merge-diff/v4 v4.0.1/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
+sigs.k8s.io/structured-merge-diff/v4 v4.0.2 h1:YHQV7Dajm86OuqnIR6zAelnDWBRjo+YhYV9PmGrh1s8=
+sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sigs.k8s.io/yaml v1.2.0 h1:kr/MCeFWJWTwyaHoR9c8EjH9OumOmoF9YGiZd7lFm/Q=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
diff --git a/static/_redirects b/static/_redirects
index 11f4be7a8553c..fd7eba2713599 100644
--- a/static/_redirects
+++ b/static/_redirects
@@ -203,6 +203,8 @@
/docs/reference/kubernetes-api/api-index/ /docs/reference 301
+/docs/reference/kubernetes-api/labels-annotations-taints/ /docs/reference/labels-annotations-taints/ 301
+
/docs/reporting-security-issues/ /security/ 301
/docs/roadmap/ https://github.com/kubernetes/kubernetes/milestones/ 301
@@ -464,6 +466,7 @@
/docs/admin/authorization/ /docs/reference/access-authn-authz/authorization/ 301
/docs/admin/high-availability/building/ /docs/setup/production-environment/tools/kubeadm/high-availability/ 301
/code-of-conduct/ /community/code-of-conduct/ 301
+/values/ /community/values/ 302
/docs/setup/version-skew-policy/ /docs/setup/release/version-skew-policy/ 301