Official 1.14 Release Docs #13174

Merged · 50 commits · Mar 25, 2019
Changes from 18 commits

Commits
4652684
Official documentation on Poseidon/Firmament, a new multi-scheduler s…
Dec 23, 2018
cefff92
Document timeout attribute for kms-plugin. (#12158)
immutableT Jan 23, 2019
de2e67e
Official documentation on Poseidon/Firmament, a new multi-scheduler …
Jan 29, 2019
b822184
Remove initializers from doc. It will be removed in 1.14 (#12331)
caesarxuchao Jan 29, 2019
e528300
kubeadm: Document CRI auto detection functionality (#12462)
rosti Feb 8, 2019
ce380cc
Resolved merge conflict removing initializers
jimangel Feb 11, 2019
df1b59b
Minor doc change for GAing Pod DNS Config (#12514)
MrHohn Feb 12, 2019
eb5aaa7
Graduate ExpandInUsePersistentVolumes feature to beta (#10574)
mlmhl Feb 13, 2019
1588645
Rename 2018-11-07-grpc-load-balancing-with-linkerd.md.md file (#12594)
makoscafee Feb 13, 2019
48fd1e5
Add dynamic percentage of node scoring to user docs (#12235)
bsalamat Feb 15, 2019
d22320f
delete special symbol (#12445)
hyponet Feb 17, 2019
582995a
Update documentation for VolumeSubpathEnvExpansion (#11843)
Feb 20, 2019
16b551c
Graduate Pod Priority and Preemption to GA (#12428)
bsalamat Feb 20, 2019
99d3d86
Added Instana links to the documentation (#12977)
noctarius Mar 7, 2019
9742867
Update kubectl plugins to stable (#12847)
soltysh Mar 11, 2019
5f049ec
documentation for CSI topology beta (#12889)
msau42 Mar 11, 2019
98b449d
Document changes to default RBAC discovery ClusterRole(Binding)s (#12…
dekkagaijin Mar 12, 2019
ead0a28
CSI raw block to beta (#12931)
bswartz Mar 12, 2019
b37e645
Change incorrect string raw to block (#12926)
bswartz Mar 15, 2019
ac99ed4
Update documentation on node OS/arch labels (#12976)
yujuhong Mar 15, 2019
f7aa166
local pv GA doc updates (#12915)
msau42 Mar 15, 2019
f18d212
Publish CRD OpenAPI Documentation (#12910)
roycaihw Mar 15, 2019
90d53c2
kubeadm: add document for upgrading from 1.13 to 1.14 (single CP and …
neolit123 Mar 15, 2019
ed5f459
fix bullet indentation (#13214)
roycaihw Mar 15, 2019
6e49749
mark PodReadinessGate GA (#12800)
freehan Mar 16, 2019
cc769cb
Update RuntimeClass documentation for beta (#13043)
tallclair Mar 16, 2019
ee19771
CSI ephemeral volume alpha documentation (#10934)
vladimirvivien Mar 16, 2019
092e288
update kubectl documentation (#12867)
Liujingfang1 Mar 16, 2019
07c4eb3
Documentation for Windows GMSA feature (#12936)
ddebroy Mar 16, 2019
21d60d1
HugePages graduated to GA (#13004)
derekwaynecarr Mar 16, 2019
b36d68a
Docs for node PID limiting (https://github.com/kubernetes/kubernetes/…
RobertKrawitz Mar 16, 2019
c037ab5
kubeadm: update the reference documentation for 1.14 (#12911)
neolit123 Mar 16, 2019
f50c664
kubeadm: update the 1.14 HA guide (#13191)
neolit123 Mar 16, 2019
61372fe
resolve conflicts for master
jimangel Mar 16, 2019
a0b5acd
fixed a few missed merge conflicts
jimangel Mar 16, 2019
92fd5d4
Admission Webhook new features doc (#12938)
mbohlool Mar 18, 2019
3bf2d15
Clarifications and fixes in GMSA doc (#13226)
ddebroy Mar 18, 2019
e15667a
RunAsGroup documentation for Progressing this to Beta (#12297)
krmayankk Mar 18, 2019
655aed9
start serverside-apply documentation (#13077)
kwiesmueller Mar 18, 2019
965a801
Document CSI update (#12928)
gnufied Mar 19, 2019
cb0b9d0
Overall docs for CSI Migration feature (#12935)
ddebroy Mar 19, 2019
f1ffe72
Windows documentation updates for 1.14 (#12929)
craiglpeters Mar 19, 2019
94c455a
add section on upgrading CoreDNS (#12909)
rajansandeep Mar 19, 2019
30915de
documentation for kubelet resource metrics endpoint (#12934)
dashpole Mar 20, 2019
8f68521
windows docs updates for 1.14 (#13279)
michmike Mar 20, 2019
ae5d409
update to windows docs for 1.14 (#13322)
michmike Mar 22, 2019
74319b6
Update intro-windows-in-kubernetes.md (#13344)
michmike Mar 23, 2019
f902f7d
server side apply followup (#13321)
kwiesmueller Mar 23, 2019
87c1d6a
resolving conflicts
jimangel Mar 23, 2019
3459d02
Update config.toml (#13365)
jimangel Mar 25, 2019
31 changes: 9 additions & 22 deletions content/en/docs/concepts/configuration/pod-priority-preemption.md
@@ -9,7 +9,7 @@ weight: 70

{{% capture overview %}}

{{< feature-state for_k8s_version="1.11" state="beta" >}}
{{< feature-state for_k8s_version="1.14" state="stable" >}}

[Pods](/docs/user-guide/pods) can have _priority_. Priority indicates the
importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the
@@ -19,8 +19,8 @@ pending Pod possible.
In Kubernetes 1.9 and later, Priority also affects scheduling order of Pods and
out-of-resource eviction ordering on the Node.

Pod priority and preemption are moved to beta since Kubernetes 1.11 and are
enabled by default in this release and later.
Pod priority and preemption graduated to beta in Kubernetes 1.11 and to GA in
Kubernetes 1.14. They have been enabled by default since 1.11.

In Kubernetes versions where Pod priority and preemption is still an alpha-level
feature, you need to explicitly enable it. To use these features in the older
@@ -34,6 +34,7 @@ Kubernetes Version | Priority and Preemption State | Enabled by default
1.9 | alpha | no
1.10 | alpha | no
1.11 | beta | yes
1.14 | GA | yes

{{< warning >}}In a cluster where not all users are trusted, a
malicious user could create pods at the highest possible priorities, causing
@@ -71,15 +72,15 @@ Pods.
## How to disable preemption

{{< note >}}
In Kubernetes 1.11, critical pods (except DaemonSet pods, which are
still scheduled by the DaemonSet controller) rely on scheduler preemption to be
scheduled when a cluster is under resource pressure. For this reason, you will
need to run an older version of Rescheduler if you decide to disable preemption.
More on this is provided below.
In Kubernetes 1.12+, critical pods rely on scheduler preemption to be scheduled
when a cluster is under resource pressure. For this reason, it is not
recommended to disable preemption.
{{< /note >}}

In Kubernetes 1.11 and later, preemption is controlled by a kube-scheduler flag
`disablePreemption`, which is set to `false` by default.
If you want to disable preemption despite the above note, you can set
`disablePreemption` to `true`.

This option is available in component configs only and is not available in
old-style command line options. Below is a sample component config to disable
@@ -96,20 +97,6 @@ algorithmSource:
disablePreemption: true
```
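For context, a complete sample would look roughly like the following (a minimal sketch assuming the `componentconfig/v1alpha1` `KubeSchedulerConfiguration` schema shown on this page; exact field placement may vary by version):

```yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
algorithmSource:
  provider: DefaultProvider

# top-level switch; true turns preemption off
disablePreemption: true
```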
### Start an older version of Rescheduler in the cluster
When priority or preemption is disabled, we must run Rescheduler v0.3.1 (instead
of v0.4.0) to ensure that critical Pods are scheduled when nodes or cluster are
under resource pressure. Since critical Pod annotation is still supported in
this release, running Rescheduler should be enough and no other changes to the
configuration of Pods should be needed.
Rescheduler images can be found at:
[gcr.io/k8s-image-staging/rescheduler](http://gcr.io/k8s-image-staging/rescheduler).
In the code, changing the Rescheduler version back to v.0.3.1 is the reverse of
[this PR](https://github.com/kubernetes/kubernetes/pull/65454).
## PriorityClass
A PriorityClass is a non-namespaced object that defines a mapping from a
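For illustration, a complete PriorityClass object might look like the following (an illustrative sketch; `scheduling.k8s.io/v1` is the GA API in 1.14, and the name and value are placeholders):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority          # illustrative name
value: 1000000                 # higher value = higher priority
globalDefault: false           # do not make this the cluster-wide default
description: "Illustrative class for Pods that may preempt lower-priority Pods."
```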
70 changes: 37 additions & 33 deletions content/en/docs/concepts/configuration/scheduler-perf-tuning.md
@@ -8,31 +8,39 @@ weight: 70

{{% capture overview %}}

{{< feature-state for_k8s_version="1.12" >}}
{{< feature-state for_k8s_version="1.14" state="beta" >}}

Kube-scheduler is the Kubernetes default scheduler. It is responsible for
placement of Pods on Nodes in a cluster. Nodes in a cluster that meet the
scheduling requirements of a Pod are called "feasible" Nodes for the Pod. The
scheduler finds feasible Nodes for a Pod and then runs a set of functions to
score the feasible Nodes and picks a Node with the highest score among the
feasible ones to run the Pod. The scheduler then notifies the API server about this
decision in a process called "Binding".
feasible ones to run the Pod. The scheduler then notifies the API server about
this decision in a process called "Binding".

{{% /capture %}}

{{% capture body %}}

## Percentage of Nodes to Score

Before Kubernetes 1.12, Kube-scheduler used to check the feasibility of all the
nodes in a cluster and then scored the feasible ones. Kubernetes 1.12 has a new
feature that allows the scheduler to stop looking for more feasible nodes once
it finds a certain number of them. This improves the scheduler's performance in
large clusters. The number is specified as a percentage of the cluster size and
is controlled by a configuration option called `percentageOfNodesToScore`. The
range should be between 1 and 100. Other values are considered as 100%. The
default value of this option is 50%. A cluster administrator can change this value by providing a
different value in the scheduler configuration. However, it may not be necessary to change this value.
Before Kubernetes 1.12, Kube-scheduler used to check the feasibility of all
nodes in a cluster and then scored the feasible ones. Kubernetes 1.12 added a
new feature that allows the scheduler to stop looking for more feasible nodes
once it finds a certain number of them. This improves the scheduler's
performance in large clusters. The number is specified as a percentage of the
cluster size. The percentage can be controlled by a configuration option called
`percentageOfNodesToScore`. The range should be between 1 and 100. Larger values
are treated as 100%. Zero is equivalent to not providing the config option.
Kubernetes 1.14 has logic to find the percentage of nodes to score based on the
Reviewer (Contributor): How about starting a new paragraph here?

size of the cluster if it is not specified in the configuration. It uses a
linear formula which yields 50% for a 100-node cluster. The formula yields 10%
for a 5000-node cluster. The lower bound for the automatic value is 5%. In other
words, the scheduler always scores at least 5% of the cluster no matter how
large the cluster is, unless the user provides the config option with a value
smaller than 5.
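Expressed as a formula, the automatic percentage described above can be sketched as follows (an illustrative reconstruction from the figures given in this section, using integer division; not a quotation of the scheduler source):

$$
\text{percentageOfNodesToScore} \;=\; \max\!\left(5,\; 50 - \left\lfloor \frac{\text{numAllNodes}}{125} \right\rfloor\right)
$$

This yields 50 for a 100-node cluster and 10 for a 5000-node cluster, matching the values stated above, with the 5% lower bound applied for very large clusters.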

Below is an example configuration that sets `percentageOfNodesToScore` to 50%.

```yaml
apiVersion: componentconfig/v1alpha1
@@ -45,41 +53,37 @@ algorithmSource:
percentageOfNodesToScore: 50
```
{{< note >}}
In clusters with zero or less than 50 feasible nodes, the
scheduler still checks all the nodes, simply because there are not enough
feasible nodes to stop the scheduler's search early.
{{< /note >}}
{{< note >}}
In clusters with fewer than 50 feasible nodes, the scheduler still checks all
the nodes, simply because there are not enough feasible nodes to stop the
scheduler's search early.
{{< /note >}}
**To disable this feature**, you can set `percentageOfNodesToScore` to 100.

### Tuning percentageOfNodesToScore

`percentageOfNodesToScore` must be a value between 1 and 100
with the default value of 50. There is also a hardcoded minimum value of 50
nodes which is applied internally. The scheduler tries to find at
least 50 nodes regardless of the value of `percentageOfNodesToScore`. This means
that changing this option to lower values in clusters with several hundred nodes
will not have much impact on the number of feasible nodes that the scheduler
tries to find. This is intentional as this option is unlikely to improve
performance noticeably in smaller clusters. In large clusters with over a 1000
nodes setting this value to lower numbers may show a noticeable performance
improvement.
`percentageOfNodesToScore` must be a value between 1 and 100 with the default
value being calculated based on the cluster size. There is also a hardcoded
minimum of 50 nodes, which the scheduler always tries to find regardless of the
configured percentage. This means that changing
this option to lower values in clusters with several hundred nodes will not have
much impact on the number of feasible nodes that the scheduler tries to find.
This is intentional as this option is unlikely to improve performance noticeably
in smaller clusters. In large clusters with over 1000 nodes, setting this value
to lower numbers may show a noticeable performance improvement.

An important note to consider when setting this value is that when a smaller
number of nodes in a cluster are checked for feasibility, some nodes are not
sent to be scored for a given Pod. As a result, a Node which could possibly
score a higher value for running the given Pod might not even be passed to the
scoring phase. This would result in a less than ideal placement of the Pod. For
this reason, the value should not be set to very low percentages. A general rule
of thumb is to never set the value to anything lower than 30. Lower values
of thumb is to never set the value to anything lower than 10. Lower values
should be used only when the scheduler's throughput is critical for your
application and the score of nodes is not important. In other words, you prefer
to run the Pod on any Node as long as it is feasible.

It is not recommended to lower this value from its default if your cluster has
only several hundred Nodes. It is unlikely to improve the scheduler's
performance significantly.
If your cluster has several hundred Nodes or fewer, we do not recommend lowering
the default value of this configuration option. It is unlikely to improve the
scheduler's performance significantly.

### How the scheduler iterates over Nodes

@@ -91,8 +95,8 @@ for running Pods, the scheduler iterates over the nodes in a round robin
fashion. You can imagine that Nodes are in an array. The scheduler starts from
the start of the array and checks feasibility of the nodes until it finds enough
Nodes as specified by `percentageOfNodesToScore`. For the next Pod, the
scheduler continues from the point in the Node array that it stopped at when checking
feasibility of Nodes for the previous Pod.
scheduler continues from the point in the Node array that it stopped at when
checking feasibility of Nodes for the previous Pod.

If Nodes are in multiple zones, the scheduler iterates over Nodes in various
zones to ensure that Nodes from different zones are considered in the
26 changes: 15 additions & 11 deletions content/en/docs/concepts/services-networking/dns-pod-service.md
@@ -170,10 +170,10 @@ following pod-specific DNS policies. These policies are specified in the
for details on how DNS queries are handled in those cases.
- "`ClusterFirstWithHostNet`": For Pods running with hostNetwork, you should
explicitly set its DNS policy "`ClusterFirstWithHostNet`".
- "`None`": A new option value introduced in Kubernetes v1.9 (Beta in v1.10). It
allows a Pod to ignore DNS settings from the Kubernetes environment. All DNS
settings are supposed to be provided using the `dnsConfig` field in the Pod Spec.
See [DNS config](#dns-config) subsection below.
- "`None`": It allows a Pod to ignore DNS settings from the Kubernetes
environment. All DNS settings are supposed to be provided using the
`dnsConfig` field in the Pod Spec.
See [Pod's DNS config](#pod-s-dns-config) subsection below.

{{< note >}}
"Default" is not the default DNS policy. If `dnsPolicy` is not
@@ -205,13 +205,7 @@ spec:

### Pod's DNS Config

Kubernetes v1.9 introduces an Alpha feature (Beta in v1.10) that allows users more
control on the DNS settings for a Pod. This feature is enabled by default in v1.10.
To enable this feature in v1.9, the cluster administrator
needs to enable the `CustomPodDNS` feature gate on the apiserver and the kubelet,
for example, "`--feature-gates=CustomPodDNS=true,...`".
When the feature gate is enabled, users can set the `dnsPolicy` field of a Pod
to "`None`" and they can add a new field `dnsConfig` to a Pod Spec.
Pod's DNS Config allows users more control over the DNS settings for a Pod.

The `dnsConfig` field is optional and it can work with any `dnsPolicy` settings.
However, when a Pod's `dnsPolicy` is set to "`None`", the `dnsConfig` field has
@@ -257,6 +251,16 @@ search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```
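For context, a Pod that opts out of cluster DNS entirely and supplies its own settings might look like the following (a minimal sketch; the name, nameserver address, and search domain are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example            # illustrative name
spec:
  containers:
    - name: test
      image: nginx
  dnsPolicy: "None"            # ignore DNS settings from the Kubernetes environment
  dnsConfig:
    nameservers:
      - 1.2.3.4                # placeholder nameserver
    searches:
      - ns1.svc.cluster.local  # placeholder search domain
    options:
      - name: ndots
        value: "2"
```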

### Feature availability

The availability of Pod DNS Config and DNS Policy "`None`" is shown below.

| k8s version | Feature support |
| :---------: |:-----------:|
| 1.14 | Stable |
Reviewer (Contributor): The feature states earlier in this PR were lower case, I wonder if we should copy that here?

| 1.10 | Beta (on by default)|
| 1.9 | Alpha |

{{% /capture %}}

{{% capture whatsnext %}}
5 changes: 5 additions & 0 deletions content/en/docs/concepts/storage/storage-classes.md
@@ -151,6 +151,11 @@ The following plugins support `WaitForFirstConsumer` with pre-created Persistent
* All of the above
* [Local](#local)

{{< feature-state state="beta" for_k8s_version="1.14" >}}
[CSI volumes](/docs/concepts/storage/volumes/#csi) are also supported with dynamic provisioning
and pre-created PVs, but you'll need to look at the documentation for a specific CSI driver
to see its supported topology keys and examples. The `CSINodeInfo` feature gate must be enabled.
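As a sketch, a StorageClass for a CSI driver with delayed volume binding could look like this (the provisioner name is hypothetical; consult the driver's own documentation for real values and supported topology keys):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-wait-for-consumer  # illustrative name
provisioner: csi.example.com   # hypothetical CSI driver name
volumeBindingMode: WaitForFirstConsumer
```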

### Allowed Topologies

When a cluster operator specifies the `WaitForFirstConsumer` volume binding mode, it is no longer necessary
21 changes: 9 additions & 12 deletions content/en/docs/concepts/storage/volumes.md
@@ -1072,13 +1072,14 @@ spec:

### Using subPath with expanded environment variables

{{< feature-state for_k8s_version="v1.11" state="alpha" >}}
{{< feature-state for_k8s_version="v1.14" state="alpha" >}}


`subPath` directory names can also be constructed from Downward API environment variables.
Use the `subPathExpr` field to construct `subPath` directory names from Downward API environment variables.
Before you use this feature, you must enable the `VolumeSubpathEnvExpansion` feature gate.
The `subPath` and `subPathExpr` properties are mutually exclusive.

In this example, a Pod uses `subPath` to create a directory `pod1` within the hostPath volume `/var/log/pods`, using the pod name from the Downward API. The host directory `/var/log/pods/pod1` is mounted at `/logs` in the container.
In this example, a Pod uses `subPathExpr` to create a directory `pod1` within the hostPath volume `/var/log/pods`, using the pod name from the Downward API. The host directory `/var/log/pods/pod1` is mounted at `/logs` in the container.

```yaml
apiVersion: v1
@@ -1099,7 +1100,7 @@ spec:
volumeMounts:
- name: workdir1
mountPath: /logs
subPath: $(POD_NAME)
subPathExpr: $(POD_NAME)
restartPolicy: Never
volumes:
- name: workdir1
@@ -1216,20 +1217,16 @@ persistent volume:

#### CSI raw block volume support

{{< feature-state for_k8s_version="v1.11" state="alpha" >}}
{{< feature-state for_k8s_version="v1.14" state="beta" >}}

Starting with version 1.11, CSI introduced support for raw block volumes, which
relies on the raw block volume feature that was introduced in a previous version of
Kubernetes. This feature will make it possible for vendors with external CSI drivers to
implement raw block volumes support in Kubernetes workloads.

CSI block volume support is feature-gated and turned off by default. To run CSI with
block volume support enabled, a cluster administrator must enable the feature for each
Kubernetes component using the following feature gate flags:

```
--feature-gates=BlockVolume=true,CSIBlockVolume=true
```
CSI block volume support is feature-gated, but enabled by default. The two
feature gates which must be enabled for this feature are `BlockVolume` and
`CSIBlockVolume`.

Learn how to
[setup your PV/PVC with raw block volume support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support).
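For orientation, a PVC that requests a raw block volume looks like the following (illustrative values; see the linked page for the full walkthrough):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc              # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block            # request a raw block device instead of a filesystem
  resources:
    requests:
      storage: 10Gi
```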
content/en/docs/concepts/workloads/controllers/daemonset.md
@@ -21,7 +21,7 @@ Some typical uses of a DaemonSet are:
- running a cluster storage daemon, such as `glusterd`, `ceph`, on each node.
- running a logs collection daemon on every node, such as `fluentd` or `logstash`.
- running a node monitoring daemon on every node, such as [Prometheus Node Exporter](
https://github.com/prometheus/node_exporter), `collectd`, [Dynatrace OneAgent](https://www.dynatrace.com/technologies/kubernetes-monitoring/), [AppDynamics Agent](https://docs.appdynamics.com/display/CLOUD/Container+Visibility+with+Kubernetes), Datadog agent, New Relic agent, Ganglia `gmond` or Instana agent.
https://github.com/prometheus/node_exporter), `collectd`, [Dynatrace OneAgent](https://www.dynatrace.com/technologies/kubernetes-monitoring/), [AppDynamics Agent](https://docs.appdynamics.com/display/CLOUD/Container+Visibility+with+Kubernetes), Datadog agent, New Relic agent, Ganglia `gmond`, or [Instana Agent](https://www.instana.com/supported-integrations/kubernetes-monitoring/).

In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon.
A more complex setup might use multiple DaemonSets for a single type of daemon, but with
content/en/docs/reference/access-authn-authz/admission-controllers.md
@@ -335,13 +335,6 @@ Examples of information you might put here are:

In any case, the annotations are provided by the user and are not validated by Kubernetes in any way. In the future, if an annotation is determined to be widely useful, it may be promoted to a named field of ImageReviewSpec.

### Initializers (alpha) {#initializers}

The admission controller determines the initializers of a resource based on the existing
`InitializerConfiguration`s. It sets the pending initializers by modifying the
metadata of the resource to be created.
For more information, please check [Dynamic Admission Control](/docs/reference/access-authn-authz/extensible-admission-controllers/).

### LimitPodHardAntiAffinityTopology {#limitpodhardantiaffinitytopology}

This admission controller denies any pod that defines `AntiAffinity` topology key other than
content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md
@@ -5,6 +5,7 @@ reviewers:
- whitlockjc
- caesarxuchao
- deads2k
- liggitt
title: Dynamic Admission Control
content_template: templates/concept
weight: 40
@@ -19,16 +20,14 @@ the following:
* They need to be compiled into kube-apiserver.
* They are only configurable when the apiserver starts up.

Two features, *Admission Webhooks* (beta in 1.9) and *Initializers* (alpha),
address these limitations. They allow admission controllers to be developed
out-of-tree and configured at runtime.
*Admission Webhooks* (beta in 1.9) address these limitations. They allow
admission controllers to be developed out-of-tree and configured at runtime.

This page describes how to use Admission Webhooks.

This page describes how to use Admission Webhooks and Initializers.
{{% /capture %}}

{{% capture body %}}
## Admission Webhooks

### What are admission webhooks?

Admission webhooks are HTTP callbacks that receive admission requests and do
@@ -196,116 +195,4 @@ users:
```
Of course you need to set up the webhook server to handle these authentications.
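For reference, registering a validating webhook is done with a configuration object along these lines (a minimal sketch; the webhook name, Service name, path, and `caBundle` are placeholders):

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-validating-webhook   # illustrative name
webhooks:
  - name: pods.example.com           # must be fully qualified
    clientConfig:
      service:
        namespace: default
        name: example-webhook-svc    # hypothetical Service fronting the webhook server
        path: /validate
      caBundle: "<base64-encoded CA bundle>"   # placeholder
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
```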
## Initializers
### What are initializers?
*Initializer* has two meanings:
* A list of pending pre-initialization tasks, stored in every object's metadata
(e.g., "AddMyCorporatePolicySidecar").
* A user customized controller, which actually performs those tasks. The name of the task
corresponds to the controller which performs the task. For clarity, we call
them *initializer controllers* in this page.
Once the controller has performed its assigned task, it removes its name from
the list. For example, it may send a PATCH that inserts a container in a pod and
also removes its name from `metadata.initializers.pending`. Initializers may make
mutations to objects.

Objects which have a non-empty initializer list are considered uninitialized,
and are not visible in the API unless specifically requested by using the query parameter,
`?includeUninitialized=true`.

### When to use initializers?

Initializers are useful for admins to force policies (e.g., the
[AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)
admission controller), or to inject defaults (e.g., the
[DefaultStorageClass](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
admission controller), etc.

{{< note >}}
If your use case does not involve mutating objects, consider using
external admission webhooks, as they have better performance.
{{< /note >}}

### How are initializers triggered?

When an object is POSTed, it is checked against all existing
`initializerConfiguration` objects (explained below). For all that it matches,
all `spec.initializers[].name`s are appended to the new object's
`metadata.initializers.pending` field.

An initializer controller should list and watch for uninitialized objects, by
using the query parameter `?includeUninitialized=true`. If using client-go, just
set
[listOptions.includeUninitialized](https://github.com/kubernetes/kubernetes/blob/v1.13.0/staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/types.go#L332)
to true.

For the observed uninitialized objects, an initializer controller should first
check if its name matches `metadata.initializers.pending[0]`. If so, it should then
perform its assigned task and remove its name from the list.

### Enable initializers alpha feature

*Initializers* is an alpha feature, so it is disabled by default. To turn it on,
you need to:

* Include "Initializers" in the `--enable-admission-plugins` flag when starting
`kube-apiserver`. If you have multiple `kube-apiserver` replicas, all should
have the same flag setting.

* Enable the dynamic admission controller registration API by adding
`admissionregistration.k8s.io/v1alpha1` to the `--runtime-config` flag passed
to `kube-apiserver`, e.g.
`--runtime-config=admissionregistration.k8s.io/v1alpha1`. Again, all replicas
should have the same flag setting.

### Deploy an initializer controller

You should deploy an initializer controller via the [deployment
API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#deployment-v1beta1-apps).

### Configure initializers on the fly

You can configure what initializers are enabled and what resources are subject
to the initializers by creating `initializerConfiguration` resources.

You should first deploy the initializer controller and make sure that it is
working properly before creating the `initializerConfiguration`. Otherwise, any
newly created resources will be stuck in an uninitialized state.

The following is an example `initializerConfiguration`:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
name: example-config
initializers:
# the name needs to be fully qualified, i.e., containing at least two "."
- name: podimage.example.com
rules:
# apiGroups, apiVersion, resources all support wildcard "*".
# "*" cannot be mixed with non-wildcard.
- apiGroups:
- ""
apiVersions:
- v1
resources:
- pods
```

After you create the `initializerConfiguration`, the system will take a few
seconds to honor the new configuration. Then, `"podimage.example.com"` will be
appended to the `metadata.initializers.pending` field of newly created pods. You
should already have a ready "podimage" initializer controller that handles pods
whose `metadata.initializers.pending[0].name="podimage.example.com"`. Otherwise
the pods will be stuck in an uninitialized state.

Make sure that all expansions of the `<apiGroup, apiVersions, resources>` tuple
in a `rule` are valid. If they are not, separate them in different `rules`.
{{% /capture %}}
11 changes: 8 additions & 3 deletions content/en/docs/reference/access-authn-authz/rbac.md
@@ -471,13 +471,18 @@ NOTE: editing the role is not recommended as changes will be overwritten on API
</tr>
<tr>
<td><b>system:basic-user</b></td>
<td><b>system:authenticated</b> and <b>system:unauthenticated</b> groups</td>
<td>Allows a user read-only access to basic information about themselves.</td>
<td><b>system:authenticated</b> group</td>
<td>Allows a user read-only access to basic information about themselves. Prior to 1.14, this role was also bound to `system:unauthenticated` by default.</td>
</tr>
<tr>
<td><b>system:discovery</b></td>
<td><b>system:authenticated</b> group</td>
<td>Allows read-only access to API discovery endpoints needed to discover and negotiate an API level. Prior to 1.14, this role was also bound to `system:unauthenticated` by default.</td>
</tr>
<tr>
<td><b>system:public-info-viewer</b></td>
<td><b>system:authenticated</b> and <b>system:unauthenticated</b> groups</td>
<td>Allows read-only access to API discovery endpoints needed to discover and negotiate an API level.</td>
<td>Allows read-only access to non-sensitive information about the cluster. Introduced in 1.14.</td>
</tr>
</table>
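For illustration, the default binding for `system:public-info-viewer` would look roughly like this if written out as a manifest (a sketch reconstructed from the table above, not a dump from a live cluster):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:public-info-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:public-info-viewer
subjects:
# bound to both authenticated and unauthenticated users, per the table above
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated
```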

content/en/docs/reference/command-line-tools-reference/feature-gates.md
@@ -55,9 +55,12 @@ different Kubernetes components.
| `CPUManager` | `true` | Beta | 1.10 | |
| `CRIContainerLogRotation` | `false` | Alpha | 1.10 | 1.10 |
| `CRIContainerLogRotation` | `true` | Beta| 1.11 | |
| `CSIBlockVolume` | `false` | Alpha | 1.11 | |
| `CSIDriverRegistry` | `false` | Alpha | 1.12 | |
| `CSINodeInfo` | `false` | Alpha | 1.12 | |
| `CSIBlockVolume` | `false` | Alpha | 1.11 | 1.13 |
| `CSIBlockVolume` | `true` | Beta | 1.14 | |
| `CSIDriverRegistry` | `false` | Alpha | 1.12 | 1.13 |
| `CSIDriverRegistry` | `true` | Beta | 1.14 | |
| `CSINodeInfo` | `false` | Alpha | 1.12 | 1.13 |
| `CSINodeInfo` | `true` | Beta | 1.14 | |
| `CSIPersistentVolume` | `false` | Alpha | 1.9 | 1.9 |
| `CSIPersistentVolume` | `true` | Beta | 1.10 | 1.12 |
| `CSIPersistentVolume` | `true` | GA | 1.13 | - |
@@ -79,7 +82,8 @@ different Kubernetes components.
| `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 |
| `DynamicVolumeProvisioning` | `true` | GA | 1.8 | |
| `EnableEquivalenceClassCache` | `false` | Alpha | 1.8 | |
| `ExpandInUsePersistentVolumes` | `false` | Alpha | 1.11 | |
| `ExpandInUsePersistentVolumes` | `false` | Alpha | 1.11 | 1.13 |
| `ExpandInUsePersistentVolumes` | `true` | Beta | 1.14 | |
| `ExpandPersistentVolumes` | `false` | Alpha | 1.8 | 1.10 |
| `ExpandPersistentVolumes` | `true` | Beta | 1.11 | |
| `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | |
@@ -142,7 +146,7 @@ different Kubernetes components.
| `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 |
| `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 |
| `VolumeScheduling` | `true` | GA | 1.13 | |
| `VolumeSubpathEnvExpansion` | `false` | Alpha | 1.11 | |
| `VolumeSubpathEnvExpansion` | `false` | Alpha | 1.14 | |
| `VolumeSnapshotDataSource` | `false` | Alpha | 1.12 | - |
| `ScheduleDaemonSetPods` | `false` | Alpha | 1.11 | 1.11 |
| `ScheduleDaemonSetPods` | `true` | Beta | 1.12 | |
@@ -309,5 +313,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
enables the usage of [`local`](/docs/concepts/storage/volumes/#local) volume
type when used together with the `PersistentLocalVolumes` feature gate.
- `VolumeSnapshotDataSource`: Enable volume snapshot data source support.
- `VolumeSubpathEnvExpansion`: Enable `subPathExpr` field for expanding environment variables into a `subPath`.

{{% /capture %}}
4 changes: 4 additions & 0 deletions content/en/docs/setup/independent/create-cluster-kubeadm.md
@@ -117,6 +117,10 @@ communicates with).
be passed to kubeadm initialization. Depending on which
third-party provider you choose, you might need to set the `--pod-network-cidr` to
a provider-specific value. See [Installing a pod network add-on](#pod-network).
1. (Optional) Since version 1.14, kubeadm tries to detect the container runtime on Linux
by using a list of well-known domain socket paths. To use a different container runtime, or
if more than one is installed on the provisioned node, specify the `--cri-socket`
argument to `kubeadm init` (see the config sketch after this list). See [Installing runtime](/docs/setup/independent/install-kubeadm/#installing-runtime).
1. (Optional) Unless otherwise specified, kubeadm uses the network interface associated
with the default gateway to advertise the master's IP. To use a different
network interface, specify the `--apiserver-advertise-address=<ip-address>` argument
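As referenced in the runtime item above, the CRI socket can also be set through a kubeadm config file rather than the command-line flag (a sketch assuming the `kubeadm.k8s.io/v1beta1` API; the socket path is one of the well-known paths from the install guide):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock  # point kubeadm at containerd explicitly
```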
22 changes: 20 additions & 2 deletions content/en/docs/setup/independent/install-kubeadm.md
@@ -79,10 +79,28 @@ The pod network plugin you use (see below) may also require certain ports to be
open. Since this differs with each pod network plugin, please see the
documentation for the plugins about what port(s) they need.

## Installing runtime
## Installing runtime {#installing-runtime}

Since v1.6.0, Kubernetes has enabled the use of CRI, Container Runtime Interface, by default.
The container runtime used by default is Docker, which is enabled through the built-in

Since v1.14.0, kubeadm will try to automatically detect the container runtime on Linux nodes
by scanning through a list of well-known domain sockets. The detectable runtimes and the
socket paths that are used can be found in the table below.

| Runtime | Domain Socket |
|------------|----------------------------------|
| Docker | /var/run/docker.sock |
| containerd | /run/containerd/containerd.sock |
| CRI-O | /var/run/crio/crio.sock |

If both Docker and containerd are detected, Docker takes precedence. This is
needed because Docker 18.09 ships with containerd and both are detectable.
If two or more other runtimes are detected, kubeadm will exit with an appropriate
error message.

On non-Linux nodes the container runtime used by default is Docker.
Reviewer (Contributor) suggested: "On non-Linux nodes, kubeadm defaults to using Docker as the container runtime." (My suggestion, however original wording is OK too.)


If the container runtime of choice is Docker, it is used through the built-in
Reviewer (Contributor) suggested: "When using Docker as the container runtime, Kubernetes relies on a built-in `dockershim` CRI implementation inside of the `kubelet`."

`dockershim` CRI implementation inside of the `kubelet`.

Other CRI-based runtimes include:
4 changes: 3 additions & 1 deletion content/en/docs/tasks/administer-cluster/kms-provider.md
@@ -31,7 +31,8 @@ To configure a KMS provider on the API server, include a provider of type ```kms

* `name`: Display name of the KMS plugin.
* `endpoint`: Listen address of the gRPC server (KMS plugin). The endpoint is a UNIX domain socket.
* `cachesize`: Number of data encryption keys (DEKs) to be cached in the clear. When cached, DEKs can be used without another call to the KMS; whereas DEKs that are not cached require a call to the KMS to unwrap..
* `cachesize`: Number of data encryption keys (DEKs) to be cached in the clear. When cached, DEKs can be used without another call to the KMS; whereas DEKs that are not cached require a call to the KMS to unwrap.
* `timeout`: How long kube-apiserver should wait for the kms-plugin to respond before returning an error (the default is 3 seconds).

See [Understanding the encryption at rest configuration](/docs/tasks/administer-cluster/encrypt-data).

@@ -89,6 +90,7 @@ resources:
name: myKmsPlugin
endpoint: unix:///tmp/socketfile.sock
cachesize: 100
timeout: 3s
- identity: {}
```
4 changes: 1 addition & 3 deletions content/en/docs/tasks/extend-kubectl/kubectl-plugins.md
@@ -9,7 +9,7 @@ content_template: templates/task

{{% capture overview %}}

{{< feature-state state="beta" >}}
{{< feature-state state="stable" >}}
Reviewer (Contributor) suggested: `{{< feature-state for_k8s_version="1.14" state="stable" >}}`

This guide demonstrates how to install and write extensions for [kubectl](/docs/reference/kubectl/kubectl/). By thinking of core `kubectl` commands as essential building blocks for interacting with a Kubernetes cluster, a cluster administrator can think
of plugins as a means of utilizing these building blocks to create more complex behavior. Plugins extend `kubectl` with new sub-commands, allowing for new and custom features not included in the main distribution of `kubectl`.
@@ -24,8 +24,6 @@ You need to have a working `kubectl` binary installed.
Plugins were officially introduced as an alpha feature in the v1.8.0 release. They have been re-worked in the v1.12.0 release to support a wider range of use-cases. So, while some parts of the plugins feature were already available in previous versions, a `kubectl` version of 1.12.0 or later is recommended if you are following these docs.
{{< /note >}}

Until a GA version is released, plugins should be considered unstable, and their underlying mechanism is prone to change.

{{% /capture %}}

{{% capture steps %}}
content/en/docs/tasks/debug-application-cluster/resource-usage-monitoring.md
@@ -91,7 +91,7 @@ A common configuration on [Minikube](https://github.com/kubernetes/minikube) and
There is a [walkthrough of how to install this configuration in your cluster](https://blog.kublr.com/how-to-utilize-the-heapster-influxdb-grafana-stack-in-kubernetes-for-monitoring-pods-4a553f4d36c9).
As of Kubernetes 1.11, Heapster is deprecated, as per [sig-instrumentation](https://github.com/kubernetes/community/tree/master/sig-instrumentation). See [Prometheus vs. Heapster vs. Kubernetes Metrics APIs](https://brancz.com/2018/01/05/prometheus-vs-heapster-vs-kubernetes-metrics-apis/) for more information on alternatives.

Hosted data analytics services such as [Datadog](https://docs.datadoghq.com/integrations/kubernetes/) also offer Kubernetes integration.
Hosted monitoring, APM, or data analytics services such as [Datadog](https://docs.datadoghq.com/integrations/kubernetes/) or [Instana](https://www.instana.com/supported-integrations/kubernetes-monitoring/) also offer Kubernetes integration.

## Additional resources

content/zh/docs/tutorials/stateful-application/basic-stateful-set.md
@@ -480,7 +480,7 @@ web-2 k8s.gcr.io/nginx-slim:0.8
`web-0` has had its image updated, but `web-1` and `web-2` still have the original
image. Complete the update by deleting the remaining Pods.

```shell
```shell
Reviewer (Contributor): I'd expect that we didn't change the zh locale documents in this PR. Have I got that right?

kubectl delete pod web-1 web-2
pod "web-1" deleted
pod "web-2" deleted
@@ -489,7 +489,7 @@ pod "web-2" deleted

Watch the StatefulSet's Pods, and wait for all of them to become Running and Ready.

```
```shell
Reviewer (Contributor): This code block mixes a command and some output (we should fix that to use the official style in a separate PR). I don't think it's right to mark this as POSIX shell code.
kubectl get pods -w -l app=nginx
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 8m
Binary file added static/images/docs/perf-test-result-1.png
Binary file added static/images/docs/perf-test-result-2.png
Binary file added static/images/docs/perf-test-result-3.png
Binary file added static/images/docs/perf-test-result-4.png
Binary file added static/images/docs/perf-test-result-5.png
Binary file added static/images/docs/perf-test-result-6.png