diff --git a/.editorconfig b/.editorconfig index 42ff3294fd16a..e49c89c4e8b78 100644 --- a/.editorconfig +++ b/.editorconfig @@ -5,7 +5,7 @@ charset = utf-8 max_line_length = 80 trim_trailing_whitespace = true -[*.{html,js,json,sass,md,mmark,toml,yaml}] +[*.{css,html,js,json,sass,md,mmark,toml,yaml}] indent_style = space indent_size = 2 diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index 9bc9a38f16e83..8a8da8978e5d9 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -51,6 +51,7 @@ aliases: - sftim - steveperry-53 - tengqm + - vineethreddy02 - xiangpengzhao - zacharysarah - zparnold @@ -127,12 +128,10 @@ aliases: - fabriziopandini - mattiaperi - micheleberardi - - rlenferink sig-docs-it-reviews: # PR reviews for Italian content - fabriziopandini - mattiaperi - micheleberardi - - rlenferink sig-docs-ja-owners: # Admins for Japanese content - cstoku - inductor diff --git a/README-zh.md b/README-zh.md index 286db04db5cdc..8a7898774a055 100644 --- a/README-zh.md +++ b/README-zh.md @@ -122,7 +122,7 @@ Open up your browser to http://localhost:1313 to view the website. As you make c -## 使用 Hugo 在本地运行网站 +## 使用 Hugo 在本地运行网站 {#running-the-site-locally-using-hugo} + -Will automatically add or remove running instances of a pod, based on a set value for that pod. Allows the pod to return to the defined number of instances if pods are deleted or if too many are started by mistake. +The control plane ensures that the defined number of Pods are running, even if some +Pods fail, if you delete Pods manually, or if too many are started by mistake. +{{< note >}} +ReplicationController is deprecated. See +{{< glossary_tooltip text="Deployment" term_id="deployment" >}}, which is similar. +{{< /note >}} diff --git a/content/en/docs/reference/glossary/shuffle-sharding.md b/content/en/docs/reference/glossary/shuffle-sharding.md new file mode 100644 index 0000000000000..7d1a128762a7e --- /dev/null +++ b/content/en/docs/reference/glossary/shuffle-sharding.md @@ -0,0 +1,45 @@ +--- +title: shuffle sharding +id: shuffle-sharding +date: 2020-03-04 +full_link: +short_description: > + A technique for assigning requests to queues that provides better isolation than hashing modulo the number of queues. + +aka: +tags: +- fundamental +--- +A technique for assigning requests to queues that provides better isolation than hashing modulo the number of queues. + + + +We are often concerned with insulating different flows of requests +from each other, so that a high-intensity flow does not crowd out low-intensity flows. +A simple way to put requests into queues is to hash some +characteristics of the request, modulo the number of queues, to get +the index of the queue to use. The hash function uses as input +characteristics of the request that align with flows. For example, in +the Internet this is often the 5-tuple of source and destination +address, protocol, and source and destination port. + +That simple hash-based scheme has the property that any high-intensity flow +will crowd out all the low-intensity flows that hash to the same queue. +Providing good insulation for a large number of flows requires a large +number of queues, which is problematic. Shuffle sharding is a more +nimble technique that can do a better job of insulating the low-intensity +flows from the high-intensity flows. The terminology of shuffle sharding uses +the metaphor of dealing a hand from a deck of cards; each queue is a +metaphorical card. 
The shuffle sharding technique starts with hashing
+the flow-identifying characteristics of the request, to produce a hash
+value with dozens or more bits. Then the hash value is used as a
+source of entropy to shuffle the deck and deal a hand of cards
+(queues). All the dealt queues are examined, and the request is put
+into whichever of the examined queues has the shortest length. With a
+modest hand size, it does not cost much to examine all the dealt cards,
+and a given low-intensity flow has a good chance to dodge the effects of a
+given high-intensity flow. With a large hand size it is expensive to examine
+the dealt queues and more difficult for the low-intensity flows to dodge the
+collective effects of a set of high-intensity flows. Thus, the hand size
+should be chosen judiciously.
+
diff --git a/content/en/docs/reference/issues-security/security.md b/content/en/docs/reference/issues-security/security.md
index e66cad55d159c..709f26ffe1b56 100644
--- a/content/en/docs/reference/issues-security/security.md
+++ b/content/en/docs/reference/issues-security/security.md
@@ -17,7 +17,7 @@ This page describes Kubernetes security and disclosure information.
{{% capture body %}}
## Security Announcements

-Join the [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) group for emails about security and major API announcements.
+Join the [kubernetes-security-announce](https://groups.google.com/forum/#!forum/kubernetes-security-announce) group for emails about security and major API announcements.

You can also subscribe to an RSS feed of the above using [this link](https://groups.google.com/forum/feed/kubernetes-announce/msgs/rss_v2_0.xml?num=50).
diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md
index 8401d00ecf7df..5e10e88ece1b6 100644
--- a/content/en/docs/reference/kubectl/cheatsheet.md
+++ b/content/en/docs/reference/kubectl/cheatsheet.md
@@ -204,7 +204,6 @@ kubectl diff -f ./my-manifest.yaml

## Updating Resources

-As of version 1.11 `rolling-update` have been deprecated (see [CHANGELOG-1.11.md](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.11.md)), use `rollout` instead.

```bash
kubectl set image deployment/frontend www=image:v2   # Rolling update "www" containers of "frontend" deployment, updating the image
@@ -215,12 +214,6 @@ kubectl rollout status -w deployment/frontend   # Watch rolling
kubectl rollout restart deployment/frontend   # Rolling restart of the "frontend" deployment

-# deprecated starting version 1.11
-kubectl rolling-update frontend-v1 -f frontend-v2.json   # (deprecated) Rolling update pods of frontend-v1
-kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2   # (deprecated) Change the name of the resource and update the image
-kubectl rolling-update frontend --image=image:v2   # (deprecated) Update the pods image of frontend
-kubectl rolling-update frontend-v1 frontend-v2 --rollback   # (deprecated) Abort existing rollout in progress
-
cat pod.json | kubectl replace -f -   # Replace a pod based on the JSON passed into std

# Force replace, delete and then re-create the resource. Will cause a service outage.
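The dealing procedure described in the shuffle-sharding glossary entry added earlier in this patch can be made concrete with a short sketch. The following Go program is illustrative only: the function names, the FNV hash choice, and the rejection-sampling deal are assumptions of the example, not the actual API Priority and Fairness implementation.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"math/rand"
)

// dealHand hashes the flow-identifying characteristics of a request and
// uses the hash value as a source of entropy to deal handSize distinct
// queue indices (metaphorical cards) out of numQueues.
func dealHand(flow string, numQueues, handSize int) []int {
	h := fnv.New64a()
	h.Write([]byte(flow))
	rng := rand.New(rand.NewSource(int64(h.Sum64()))) // hash as entropy source

	hand := make([]int, 0, handSize)
	seen := make(map[int]bool)
	for len(hand) < handSize {
		q := rng.Intn(numQueues)
		if !seen[q] { // deal each card at most once
			seen[q] = true
			hand = append(hand, q)
		}
	}
	return hand
}

// pickQueue examines all the dealt queues and returns the shortest one.
func pickQueue(queueLengths []int, hand []int) int {
	best := hand[0]
	for _, q := range hand[1:] {
		if queueLengths[q] < queueLengths[best] {
			best = q
		}
	}
	return best
}

func main() {
	queueLengths := make([]int, 64) // 64 queues, all currently empty
	flow := "src=10.0.0.1,dst=10.0.0.2,proto=tcp,sport=1234,dport=80"
	hand := dealHand(flow, 64, 8)
	fmt.Println("dealt:", hand, "chosen:", pickQueue(queueLengths, hand))
}
```

With 64 queues and a hand of 8, two flows rarely share an entire hand, so a low-intensity flow that is dealt even one queue uncontended by a heavy flow can dodge it, which is the insulation property the glossary entry describes.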
diff --git a/content/en/docs/reference/kubectl/kubectl.md b/content/en/docs/reference/kubectl/kubectl.md index a4ac90c5137f8..75ddc04715d59 100644 --- a/content/en/docs/reference/kubectl/kubectl.md +++ b/content/en/docs/reference/kubectl/kubectl.md @@ -460,6 +460,13 @@ kubectl [flags] database username + + --tls-server-name string + + + Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used + + --token string @@ -518,6 +525,7 @@ kubectl [flags] {{% capture seealso %}} +* [kubectl alpha](/docs/reference/generated/kubectl/kubectl-commands#alpha) - Commands for features in alpha * [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands#annotate) - Update the annotations on a resource * [kubectl api-resources](/docs/reference/generated/kubectl/kubectl-commands#api-resources) - Print the supported API resources on the server * [kubectl api-versions](/docs/reference/generated/kubectl/kubectl-commands#api-versions) - Print the supported API versions on the server, in the form of "group/version" diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md index 2efae589e19cd..1bb82cf96237a 100644 --- a/content/en/docs/reference/kubectl/overview.md +++ b/content/en/docs/reference/kubectl/overview.md @@ -79,7 +79,7 @@ Operation | Syntax | Description `create` | `kubectl create -f FILENAME [flags]` | Create one or more resources from a file or stdin. `delete` | kubectl delete (-f FILENAME | TYPE [NAME | /NAME | -l label | --all]) [flags] | Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources. `describe` | kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | /NAME | -l label]) [flags] | Display the detailed state of one or more resources. -`diff` | `kubectl diff -f FILENAME [flags]`| Diff file or stdin against live configuration (**BETA**) +`diff` | `kubectl diff -f FILENAME [flags]`| Diff file or stdin against live configuration. `edit` | kubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags] | Edit and update the definition of one or more resources on the server by using the default editor. `exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | Execute a command against a container in a pod. `explain` | `kubectl explain [--recursive=false] [flags]` | Get documentation of various resources. For instance pods, nodes, services, etc. @@ -91,8 +91,7 @@ Operation | Syntax | Description `port-forward` | `kubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [flags]` | Forward one or more local ports to a pod. `proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | Run a proxy to the Kubernetes API server. `replace` | `kubectl replace -f FILENAME` | Replace a resource from a file or stdin. -`rolling-update` | kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC) [flags] | Perform a rolling update by gradually replacing the specified replication controller and its pods. -`run` | `kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [flags]` | Run a specified image on the cluster. 
+`run` | `kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=server|client|none] [--overrides=inline-json] [flags]` | Run a specified image on the cluster.
`scale` | kubectl scale (-f FILENAME | TYPE NAME | TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags] | Update the size of the specified replication controller.
`version` | `kubectl version [--client] [flags]` | Display the Kubernetes version running on the client and server.
@@ -370,6 +369,16 @@ kubectl logs <pod-name>
kubectl logs -f <pod-name>
```

+`kubectl diff` - View a diff of the proposed updates to a cluster.
+
+```shell
+# Diff resources included in "pod.json".
+kubectl diff -f pod.json
+
+# Diff file read from stdin.
+cat service.yaml | kubectl diff -f -
+```
+
## Examples: Creating and using plugins

Use the following set of examples to help you familiarize yourself with writing and using `kubectl` plugins:
diff --git a/content/en/docs/reference/kubernetes-api/api-index.md b/content/en/docs/reference/kubernetes-api/api-index.md
index 2d1a45b225e46..60d24e906b771 100644
--- a/content/en/docs/reference/kubernetes-api/api-index.md
+++ b/content/en/docs/reference/kubernetes-api/api-index.md
@@ -1,6 +1,6 @@
---
-title: v1.17
+title: v1.18
weight: 50
---

-[Kubernetes API v1.17](/docs/reference/generated/kubernetes-api/v1.17/)
+[Kubernetes API v1.18](/docs/reference/generated/kubernetes-api/v1.18/)
diff --git a/content/en/docs/reference/scheduling/_index.md b/content/en/docs/reference/scheduling/_index.md
new file mode 100644
index 0000000000000..316b774081953
--- /dev/null
+++ b/content/en/docs/reference/scheduling/_index.md
@@ -0,0 +1,5 @@
+---
+title: Scheduling
+weight: 70
+toc-hide: true
+---
diff --git a/content/en/docs/reference/scheduling/policies.md b/content/en/docs/reference/scheduling/policies.md
new file mode 100644
index 0000000000000..23d0bc915efc9
--- /dev/null
+++ b/content/en/docs/reference/scheduling/policies.md
@@ -0,0 +1,125 @@
+---
+title: Scheduling Policies
+content_template: templates/concept
+weight: 10
+---
+
+{{% capture overview %}}
+
+A scheduling Policy can be used to specify the *predicates* and *priorities*
+that the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}
+runs to [filter and score nodes](/docs/concepts/scheduling/kube-scheduler/#kube-scheduler-implementation),
+respectively.
+
+You can set a scheduling policy by running
+`kube-scheduler --policy-config-file <filename>` or
+`kube-scheduler --policy-configmap <ConfigMap>`
+and using the [Policy type](https://pkg.go.dev/k8s.io/kube-scheduler@v0.18.0/config/v1?tab=doc#Policy).
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Predicates
+
+The following *predicates* implement filtering:
+
+- `PodFitsHostPorts`: Checks if a Node has free ports (the network protocol kind)
+  for the Pod ports the Pod is requesting.
+
+- `PodFitsHost`: Checks if a Pod specifies a specific Node by its hostname.
+
+- `PodFitsResources`: Checks if the Node has free resources (e.g., CPU and memory)
+  to meet the requirements of the Pod.
+
+- `PodMatchNodeSelector`: Checks if a Pod's Node {{< glossary_tooltip term_id="selector" >}}
+  matches the Node's {{< glossary_tooltip text="label(s)" term_id="label" >}}.
+
+- `NoVolumeZoneConflict`: Evaluates whether the {{< glossary_tooltip text="Volumes" term_id="volume" >}}
+  that a Pod requests are available on the Node, given the failure zone restrictions for
+  that storage.
+
+- `NoDiskConflict`: Evaluates if a Pod can fit on a Node due to the volumes it requests,
+  and those that are already mounted.
+
+- `MaxCSIVolumeCount`: Decides how many {{< glossary_tooltip text="CSI" term_id="csi" >}}
+  volumes should be attached, and whether that's over a configured limit.
+
+- `CheckNodeMemoryPressure`: If a Node is reporting memory pressure, and there's no
+  configured exception, the Pod won't be scheduled there.
+
+- `CheckNodePIDPressure`: If a Node is reporting that process IDs are scarce, and
+  there's no configured exception, the Pod won't be scheduled there.
+
+- `CheckNodeDiskPressure`: If a Node is reporting storage pressure (a filesystem that
+  is full or nearly full), and there's no configured exception, the Pod won't be
+  scheduled there.
+
+- `CheckNodeCondition`: Nodes can report that they have a completely full filesystem,
+  that networking isn't available, or that kubelet is otherwise not ready to run Pods.
+  If such a condition is set for a Node, and there's no configured exception, the Pod
+  won't be scheduled there.
+
+- `PodToleratesNodeTaints`: Checks if a Pod's {{< glossary_tooltip text="tolerations" term_id="toleration" >}}
+  can tolerate the Node's {{< glossary_tooltip text="taints" term_id="taint" >}}.
+
+- `CheckVolumeBinding`: Evaluates if a Pod can fit due to the volumes it requests.
+  This applies for both bound and unbound
+  {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}}.
+
+## Priorities
+
+The following *priorities* implement scoring:
+
+- `SelectorSpreadPriority`: Spreads Pods across hosts, considering Pods that
+  belong to the same {{< glossary_tooltip text="Service" term_id="service" >}},
+  {{< glossary_tooltip term_id="statefulset" >}} or
+  {{< glossary_tooltip term_id="replica-set" >}}.
+
+- `InterPodAffinityPriority`: Implements preferred
+  [inter pod affinity and anti-affinity](/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity).
+
+- `LeastRequestedPriority`: Favors nodes with fewer requested resources. In other
+  words, the more Pods that are placed on a Node, and the more resources those
+  Pods use, the lower the ranking this policy will give.
+
+- `MostRequestedPriority`: Favors nodes with most requested resources. This policy
+  will fit the scheduled Pods onto the smallest number of Nodes needed to run your
+  overall set of workloads.
+
+- `RequestedToCapacityRatioPriority`: Creates a requested-to-capacity based ResourceAllocationPriority, using a default resource scoring function shape.
+
+- `BalancedResourceAllocation`: Favors nodes with balanced resource usage.
+
+- `NodePreferAvoidPodsPriority`: Prioritizes nodes according to the node annotation
+  `scheduler.alpha.kubernetes.io/preferAvoidPods`. You can use this to hint that
+  two different Pods shouldn't run on the same Node.
+
+- `NodeAffinityPriority`: Prioritizes nodes according to the node affinity scheduling
+  preferences indicated in `PreferredDuringSchedulingIgnoredDuringExecution`.
+  You can read more about this in [Assigning Pods to Nodes](/docs/concepts/configuration/assign-pod-node/).
+
+- `TaintTolerationPriority`: Prepares the priority list for all the nodes, based on
+  the number of intolerable taints on the node. This policy adjusts a node's rank
+  taking that list into account.
+
+- `ImageLocalityPriority`: Favors nodes that already have the
+  {{< glossary_tooltip text="container images" term_id="image" >}} for that
+  Pod cached locally.
+
+- `ServiceSpreadingPriority`: For a given Service, this policy aims to make sure that
+  the Pods for the Service run on different nodes. It favors scheduling onto nodes
+  that don't have Pods for the Service already assigned there. The overall outcome is
+  that the Service becomes more resilient to a single Node failure.
+
+- `EqualPriority`: Gives an equal weight of one to all nodes.
+
+- `EvenPodsSpreadPriority`: Implements preferred
+  [pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+* Learn about [scheduling](/docs/concepts/scheduling/kube-scheduler/)
+* Learn about [kube-scheduler profiles](/docs/reference/scheduling/profiles/)
+{{% /capture %}}
diff --git a/content/en/docs/reference/scheduling/profiles.md b/content/en/docs/reference/scheduling/profiles.md
new file mode 100644
index 0000000000000..f5595f8480bdf
--- /dev/null
+++ b/content/en/docs/reference/scheduling/profiles.md
@@ -0,0 +1,181 @@
+---
+title: Scheduling Profiles
+content_template: templates/concept
+weight: 20
+---
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
+
+A scheduling Profile allows you to configure the different stages of scheduling
+in the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}.
+Each stage is exposed in an extension point. Plugins provide scheduling behaviors
+by implementing one or more of these extension points.
+
+You can specify scheduling profiles by running `kube-scheduler --config <filename>`,
+using the component config APIs
+([`v1alpha1`](https://pkg.go.dev/k8s.io/kube-scheduler@{{< param "fullversion" >}}/config/v1alpha1?tab=doc#KubeSchedulerConfiguration)
+or [`v1alpha2`](https://pkg.go.dev/k8s.io/kube-scheduler@{{< param "fullversion" >}}/config/v1alpha2?tab=doc#KubeSchedulerConfiguration)).
+The `v1alpha2` API allows you to configure kube-scheduler to run
+[multiple profiles](#multiple-profiles).
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Extension points
+
+Scheduling happens in a series of stages that are exposed through the following
+extension points:
+
+1. `QueueSort`: These plugins provide an ordering function that is used to
+   sort pending Pods in the scheduling queue. Exactly one queue sort plugin
+   may be enabled at a time.
+1. `PreFilter`: These plugins are used to pre-process or check information
+   about a Pod or the cluster before filtering.
+1. `Filter`: These plugins are the equivalent of Predicates in a scheduling
+   Policy and are used to filter out nodes that cannot run the Pod. Filters
+   are called in the configured order.
+1. `PreScore`: This is an informational extension point that can be used
+   for doing pre-scoring work.
+1. `Score`: These plugins provide a score to each node that has passed the
+   filtering phase. The scheduler then selects the node with the highest
+   weighted sum of scores.
+1. `Reserve`: This is an informational extension point that notifies plugins
+   when resources have been reserved for a given Pod.
+1. `Permit`: These plugins can prevent or delay the binding of a Pod.
+1. `PreBind`: These plugins perform any work required before a Pod is bound.
+1. `Bind`: These plugins bind a Pod to a Node. Bind plugins are called in order
+   and once one has done the binding, the remaining plugins are skipped. At
+   least one bind plugin is required.
+1. `PostBind`: This is an informational extension point that is called after
+   a Pod has been bound.
+1. `UnReserve`: This is an informational extension point that is called if
+   a Pod is rejected after being reserved and put on hold by a `Permit` plugin.
+
+## Scheduling plugins
+
+The following plugins, enabled by default, implement one or more of these
+extension points:
+
+- `DefaultPodTopologySpread`: Favors spreading across nodes for Pods that belong to
+  {{< glossary_tooltip text="Services" term_id="service" >}},
+  {{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}} and
+  {{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}}.
+  Extension points: `PreScore`, `Score`.
+- `ImageLocality`: Favors nodes that already have the container images that the
+  Pod runs.
+  Extension points: `Score`.
+- `TaintToleration`: Implements
+  [taints and tolerations](/docs/concepts/configuration/taint-and-toleration/).
+  Extension points: `Filter`, `PreScore`, `Score`.
+- `NodeName`: Checks if a Pod spec node name matches the current node.
+  Extension points: `Filter`.
+- `NodePorts`: Checks if a node has free ports for the requested Pod ports.
+  Extension points: `PreFilter`, `Filter`.
+- `NodePreferAvoidPods`: Scores nodes according to the node
+  {{< glossary_tooltip text="annotation" term_id="annotation" >}}
+  `scheduler.alpha.kubernetes.io/preferAvoidPods`.
+  Extension points: `Score`.
+- `NodeAffinity`: Implements
+  [node selectors](/docs/concepts/configuration/assign-pod-node/#nodeselector)
+  and [node affinity](/docs/concepts/configuration/assign-pod-node/#node-affinity).
+  Extension points: `Filter`, `Score`.
+- `PodTopologySpread`: Implements
+  [Pod topology spread](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
+  Extension points: `PreFilter`, `Filter`, `PreScore`, `Score`.
+- `NodeUnschedulable`: Filters out nodes that have `.spec.unschedulable` set to
+  true.
+  Extension points: `Filter`.
+- `NodeResourcesFit`: Checks if the node has all the resources that the Pod is
+  requesting.
+  Extension points: `PreFilter`, `Filter`.
+- `NodeResourcesBalancedAllocation`: Favors nodes that would obtain a more
+  balanced resource usage if the Pod is scheduled there.
+  Extension points: `Score`.
+- `NodeResourcesLeastAllocated`: Favors nodes that have a low allocation of
+  resources.
+  Extension points: `Score`.
+- `VolumeBinding`: Checks if the node has, or can bind, the requested
+  {{< glossary_tooltip text="volumes" term_id="volume" >}}.
+  Extension points: `Filter`.
+- `VolumeRestrictions`: Checks that volumes mounted in the node satisfy
+  restrictions that are specific to the volume provider.
+  Extension points: `Filter`.
+- `VolumeZone`: Checks that volumes requested satisfy any zone requirements they
+  might have.
+  Extension points: `Filter`.
+- `NodeVolumeLimits`: Checks that CSI volume limits can be satisfied for the
+  node.
+  Extension points: `Filter`.
+- `EBSLimits`: Checks that AWS EBS volume limits can be satisfied for the node.
+  Extension points: `Filter`.
+- `GCEPDLimits`: Checks that GCP-PD volume limits can be satisfied for the node.
+  Extension points: `Filter`.
+- `AzureDiskLimits`: Checks that Azure disk volume limits can be satisfied for
+  the node.
+  Extension points: `Filter`.
+- `InterPodAffinity`: Implements
+  [inter-Pod affinity and anti-affinity](/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity).
+  Extension points: `PreFilter`, `Filter`, `PreScore`, `Score`.
+- `PrioritySort`: Provides the default priority-based sorting.
+  Extension points: `QueueSort`.
+- `DefaultBinder`: Provides the default binding mechanism.
+  Extension points: `Bind`.
+
+You can also enable the following plugins, through the component config APIs,
+that are not enabled by default:
+
+- `NodeResourcesMostAllocated`: Favors nodes that have a high allocation of
+  resources.
+  Extension points: `Score`.
+- `RequestedToCapacityRatio`: Favors nodes according to a configured function of
+  the allocated resources.
+  Extension points: `Score`.
+- `NodeResourceLimits`: Favors nodes that satisfy the Pod resource limits.
+  Extension points: `PreScore`, `Score`.
+- `CinderVolume`: Checks that OpenStack Cinder volume limits can be satisfied
+  for the node.
+  Extension points: `Filter`.
+- `NodeLabel`: Filters and/or scores a node according to configured
+  {{< glossary_tooltip text="label(s)" term_id="label" >}}.
+  Extension points: `Filter`, `Score`.
+- `ServiceAffinity`: Checks that Pods that belong to a
+  {{< glossary_tooltip term_id="service" >}} fit in a set of nodes defined by
+  configured labels. This plugin also favors spreading the Pods belonging to a
+  Service across nodes.
+  Extension points: `PreFilter`, `Filter`, `Score`.
+
+## Multiple profiles
+
+When using the component config API v1alpha2, a scheduler can be configured to
+run more than one profile. Each profile has an associated scheduler name.
+A Pod that wants to be scheduled according to a specific profile can include
+the corresponding scheduler name in its `.spec.schedulerName`.
+
+By default, one profile with the scheduler name `default-scheduler` is created.
+This profile includes the default plugins described above. When declaring more
+than one profile, a unique scheduler name for each of them is required.
+
+If a Pod doesn't specify a scheduler name, kube-apiserver will set it to
+`default-scheduler`. Therefore, a profile with this scheduler name should exist
+to get those Pods scheduled.
+
+{{< note >}}
+A Pod's scheduling events have the Pod's `.spec.schedulerName` as the ReportingController.
+Events for leader election use the scheduler name of the first profile in the
+list.
+{{< /note >}}
+
+{{< note >}}
+All profiles must use the same plugin in the QueueSort extension point and have
+the same configuration parameters (if applicable). This is because the scheduler
+only has one pending pods queue.
+{{< /note >}}
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+* Learn about [scheduling](/docs/concepts/scheduling/kube-scheduler/)
+{{% /capture %}}
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_admin.conf.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_admin.conf.md
index cb532d9b98026..bed37769d03e0 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_admin.conf.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_admin.conf.md
@@ -65,13 +65,6 @@ kubeadm alpha certs renew admin.conf [flags]
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file.
- - --use-api - - - Use the Kubernetes certificate API to renew certificates - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_all.md index dc10f4190fc33..be586b8e4b2ab 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_all.md @@ -59,13 +59,6 @@ kubeadm alpha certs renew all [flags] The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. - - --use-api - - - Use the Kubernetes certificate API to renew certificates - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-etcd-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-etcd-client.md index 0ce4b3aac9133..33113474a3968 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-etcd-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-etcd-client.md @@ -65,13 +65,6 @@ kubeadm alpha certs renew apiserver-etcd-client [flags] The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. - - --use-api - - - Use the Kubernetes certificate API to renew certificates - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-kubelet-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-kubelet-client.md index c1b9777480834..5123a9a0e10c8 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-kubelet-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver-kubelet-client.md @@ -65,13 +65,6 @@ kubeadm alpha certs renew apiserver-kubelet-client [flags] The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. - - --use-api - - - Use the Kubernetes certificate API to renew certificates - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver.md index 63dc1b4fc2724..7dda656795560 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_apiserver.md @@ -65,13 +65,6 @@ kubeadm alpha certs renew apiserver [flags] The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. 
- - --use-api - - - Use the Kubernetes certificate API to renew certificates - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_controller-manager.conf.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_controller-manager.conf.md index bb208fa1b4f1b..9e33b47bc45ba 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_controller-manager.conf.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_controller-manager.conf.md @@ -65,13 +65,6 @@ kubeadm alpha certs renew controller-manager.conf [flags] The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. - - --use-api - - - Use the Kubernetes certificate API to renew certificates - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-healthcheck-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-healthcheck-client.md index 57f86e1874037..12c57913dcad2 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-healthcheck-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-healthcheck-client.md @@ -65,13 +65,6 @@ kubeadm alpha certs renew etcd-healthcheck-client [flags] The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. - - --use-api - - - Use the Kubernetes certificate API to renew certificates - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-peer.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-peer.md index 2b86d657b6d41..3fa0f3fd52946 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-peer.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-peer.md @@ -65,13 +65,6 @@ kubeadm alpha certs renew etcd-peer [flags] The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. - - --use-api - - - Use the Kubernetes certificate API to renew certificates - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-server.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-server.md index 827febf1a9cc1..3484542725b4f 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-server.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_etcd-server.md @@ -65,13 +65,6 @@ kubeadm alpha certs renew etcd-server [flags] The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. 
- - --use-api - - - Use the Kubernetes certificate API to renew certificates - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_front-proxy-client.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_front-proxy-client.md index 2945b4dafa1c5..1bfc2f1d312fb 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_front-proxy-client.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_front-proxy-client.md @@ -65,13 +65,6 @@ kubeadm alpha certs renew front-proxy-client [flags] The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. - - --use-api - - - Use the Kubernetes certificate API to renew certificates - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_scheduler.conf.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_scheduler.conf.md index f4970fde9cb42..77537a7452548 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_scheduler.conf.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_certs_renew_scheduler.conf.md @@ -65,13 +65,6 @@ kubeadm alpha certs renew scheduler.conf [flags] The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. - - --use-api - - - Use the Kubernetes certificate API to renew certificates - - diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config_enable-dynamic.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config_enable-dynamic.md index 379a01f535c0d..88fb003f6833e 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config_enable-dynamic.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_alpha_kubelet_config_enable-dynamic.md @@ -16,7 +16,7 @@ kubeadm alpha kubelet config enable-dynamic [flags] ``` # Enable dynamic kubelet configuration for a Node. - kubeadm alpha phase kubelet enable-dynamic-config --node-name node-1 --kubelet-version 1.17.0 + kubeadm alpha phase kubelet enable-dynamic-config --node-name node-1 --kubelet-version 1.18.0 WARNING: This feature is still experimental, and disabled by default. Enable only if you know what you are doing, as it may have surprising side-effects at this stage. diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md index 61894d48dddc3..c0b924e5d9a04 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_list.md @@ -17,6 +17,13 @@ kubeadm config images list [flags] + + --allow-missing-template-keys     Default: true + + + If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. + + --config string @@ -24,11 +31,18 @@ kubeadm config images list [flags] Path to a kubeadm configuration file. + + -o, --experimental-output string     Default: "text" + + + Output format. 
One of: text|json|yaml|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file. + + --feature-gates string - A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false) + A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false)
PublicKeysECDSA=true|false (ALPHA - default=false) diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md index 2c5cbaca25c11..2a03893d45a40 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_config_images_pull.md @@ -35,7 +35,7 @@ kubeadm config images pull [flags] --feature-gates string - A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false) + A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false)
PublicKeysECDSA=true|false (ALPHA - default=false) diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md index 7cc5bbb078c74..d19bb01a99c18 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init.md @@ -132,7 +132,7 @@ kubeadm init [flags] --feature-gates string - A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false) + A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false)
PublicKeysECDSA=true|false (ALPHA - default=false) diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md index f649bd04d8d9c..ff285596d5204 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_all.md @@ -49,7 +49,7 @@ kubeadm init phase addon all [flags] --feature-gates string - A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false) + A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false)
PublicKeysECDSA=true|false (ALPHA - default=false) diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md index 9da2cf2bd355d..40bc2e8101724 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_coredns.md @@ -28,7 +28,7 @@ kubeadm init phase addon coredns [flags] --feature-gates string - A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false) + A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false)
PublicKeysECDSA=true|false (ALPHA - default=false) diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md index c22fc6141a7f1..fa735c27effc9 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_all.md @@ -88,7 +88,7 @@ kubeadm init phase control-plane all [flags] --feature-gates string - A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false) + A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false)
PublicKeysECDSA=true|false (ALPHA - default=false) diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md index 9444b664cd1b8..06348123864a5 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_control-plane_apiserver.md @@ -70,7 +70,7 @@ kubeadm init phase control-plane apiserver [flags] --feature-gates string - A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false) + A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false)
PublicKeysECDSA=true|false (ALPHA - default=false) diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md index debdc2485e7ad..b6b9f6d261895 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md +++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md @@ -70,7 +70,7 @@ kubeadm upgrade apply [version] --feature-gates string - A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false) + A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false)
PublicKeysECDSA=true|false (ALPHA - default=false)
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node.md
index 0f7e472655efe..7ec3ff6bbc719 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node.md
@@ -66,13 +66,6 @@ kubeadm upgrade node [flags]
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. - - --kubelet-version string - - - The *desired* version for the kubelet config after the upgrade. If not specified, the KubernetesVersion from the kubeadm-config ConfigMap will be used - - --skip-phases stringSlice
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_kubelet-config.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_kubelet-config.md
index 47ba9ada499a0..4b90ef8f344da 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_kubelet-config.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_node_phase_kubelet-config.md
@@ -2,7 +2,7 @@

### Synopsis

-Download the kubelet configuration from a ConfigMap of the form "kubelet-config-1.X" in the cluster, where X is the minor version of the kubelet. kubeadm uses the KuberneteVersion field in the kubeadm-config ConfigMap to determine what the _desired_ kubelet version is, but the user can override this by using the --kubelet-version parameter.
+Download the kubelet configuration from a ConfigMap of the form "kubelet-config-1.X" in the cluster, where X is the minor version of the kubelet. kubeadm uses the KubernetesVersion field in the kubeadm-config ConfigMap to determine what the _desired_ kubelet version is.

```
kubeadm upgrade node phase kubelet-config [flags]
```
@@ -38,13 +38,6 @@ kubeadm upgrade node phase kubelet-config [flags]
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. - - --kubelet-version string - - - The *desired* version for the kubelet config after the upgrade. If not specified, the KubernetesVersion from the kubeadm-config ConfigMap will be used - -
diff --git a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md
index d69233d4fc182..569e2bf8ae25d 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md
@@ -42,7 +42,7 @@ kubeadm upgrade plan [version] [flags]
--feature-gates string - A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false) + A set of key=value pairs that describe feature gates for various features. Options are:
IPv6DualStack=true|false (ALPHA - default=false)
PublicKeysECDSA=true|false (ALPHA - default=false) diff --git a/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md b/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md index ca02a61dac5ca..7186f28071179 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md +++ b/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md @@ -447,21 +447,11 @@ A ServiceAccount for `kube-proxy` is created in the `kube-system` namespace; the #### DNS -Note that: - +- In Kubernetes version 1.18 kube-dns usage with kubeadm is deprecated and will be removed in a future release - The CoreDNS service is named `kube-dns`. This is done to prevent any interruption in service when the user is switching the cluster DNS from kube-dns to CoreDNS or vice-versa -- In Kubernetes version 1.10 and earlier, you must enable CoreDNS with `--feature-gates=CoreDNS=true` -- In Kubernetes version 1.11 and 1.12, CoreDNS is the default DNS server and you must -invoke kubeadm with `--feature-gates=CoreDNS=false` to install kube-dns instead -- In Kubernetes version 1.13 and later, the `CoreDNS` feature gate is no longer available and kube-dns can be installed using the `--config` method described [here](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-addon) - - -A ServiceAccount for CoreDNS/kube-dns is created in the `kube-system` namespace. - -Deploy the `kube-dns` Deployment and Service: - -- It's the upstream CoreDNS deployment relatively unmodified +the `--config` method described [here](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-addon) +- A ServiceAccount for CoreDNS/kube-dns is created in the `kube-system` namespace. - The `kube-dns` ServiceAccount is bound to the privileges in the `system:kube-dns` ClusterRole ## kubeadm join phases internal design diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md index 5db402766d4bf..c6374e54e8689 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init-phase.md @@ -157,6 +157,8 @@ dns: type: "kube-dns" ``` +Please note that kube-dns usage with kubeadm is deprecated as of v1.18 and will be removed in a future release. + For more details on each field in the `v1beta2` configuration you can navigate to our [API reference pages.] (https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2) diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md index 5c10b0ce73303..9b006d15c012d 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -67,10 +67,14 @@ following steps: 1. Installs a DNS server (CoreDNS) and the kube-proxy addon components via the API server. In Kubernetes version 1.11 and later CoreDNS is the default DNS server. - To install kube-dns instead of CoreDNS, the DNS addon has to be configured in the kubeadm `ClusterConfiguration`. For more information about the configuration see the section - `Using kubeadm init with a configuration file` below. + To install kube-dns instead of CoreDNS, the DNS addon has to be configured in the kubeadm `ClusterConfiguration`. + For more information about the configuration see the section `Using kubeadm init with a configuration file` below. 
Please note that although the DNS server is deployed, it will not be scheduled until CNI is installed. + {{< warning >}} + kube-dns usage with kubeadm is deprecated as of v1.18 and will be removed in a future release. + {{< /warning >}} + ### Using init phases with kubeadm {#init-phases} Kubeadm allows you to create a control-plane node in phases using the `kubeadm init phase` command. diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index 8c7464d6948b2..fbdbf14f878d5 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -336,12 +336,14 @@ Once the last finalizer is removed, the resource is actually removed from etcd. ## Dry-run -{{< feature-state for_k8s_version="v1.13" state="beta" >}} In version 1.13, the dry-run beta feature is enabled by default. The modifying verbs (`POST`, `PUT`, `PATCH`, and `DELETE`) can accept requests in a dry-run mode. DryRun mode helps to evaluate a request through the typical request stages (admission chain, validation, merge conflicts) up until persisting objects to storage. The response body for the request is as close as possible to a non-dry-run response. The system guarantees that dry-run requests will not be persisted in storage or have any other side effects. + {{< feature-state for_k8s_version="v1.18" state="stable" >}} + +The modifying verbs (`POST`, `PUT`, `PATCH`, and `DELETE`) can accept requests in a _dry run_ mode. Dry run mode helps to evaluate a request through the typical request stages (admission chain, validation, merge conflicts) up until persisting objects to storage. The response body for the request is as close as possible to a non-dry-run response. The system guarantees that dry-run requests will not be persisted in storage or have any other side effects. ### Make a dry-run request -Dry-run is triggered by setting the `dryRun` query parameter. This parameter is a string, working as an enum, and in 1.13 the only accepted values are: +Dry-run is triggered by setting the `dryRun` query parameter. This parameter is a string, working as an enum, and the only accepted values are: * `All`: Every stage runs as normal, except for the final storage stage. Admission controllers are run to check that the request is valid, mutating controllers mutate the request, merge is performed on `PATCH`, fields are defaulted, and schema validation occurs. The changes are not persisted to the underlying storage, but the final object which would have been persisted is still returned to the user, along with the normal status code. If the request would trigger an admission controller which would have side effects, the request will be failed rather than risk an unwanted side effect. All built in admission control plugins support dry-run. Additionally, admission webhooks can declare in their [configuration object](/docs/reference/generated/kubernetes-api/v1.13/#webhook-v1beta1-admissionregistration-k8s-io) that they do not have side effects by setting the sideEffects field to "None". If a webhook actually does have side effects, then the sideEffects field should be set to "NoneOnDryRun", and the webhook should also be modified to understand the `DryRun` field in AdmissionReview, and prevent side effects on dry-run requests. * Leave the value empty, which is also the default: Keep the default modifying behavior. @@ -386,6 +388,8 @@ Some values of an object are typically generated before the object is persisted. 
{{< feature-state for_k8s_version="v1.16" state="beta" >}} +{{< note >}}Starting from Kubernetes v1.18, if you have Server Side Apply enabled then the control plane tracks managed fields for all newly created objects.{{< /note >}} + ### Introduction Server Side Apply helps users and controllers manage their resources via @@ -515,6 +519,13 @@ content type `application/apply-patch+yaml`) and `Update` (all other operations which modify the object). Both operations update the `managedFields`, but behave a little differently. +{{< note >}} +Whether you are submitting JSON data or YAML data, use `application/apply-patch+yaml` as the +Content-Type header value. + +All JSON documents are valid YAML. +{{< /note >}} + For instance, only the apply operation fails on conflicts while update does not. Also, apply operations are required to identify themselves by providing a `fieldManager` query parameter, while the query parameter is optional for update @@ -626,8 +637,9 @@ case. With the Server Side Apply feature enabled, the `PATCH` endpoint accepts the additional `application/apply-patch+yaml` content type. Users of Server Side -Apply can send partially specified objects to this endpoint. An applied config -should always include every field that the applier has an opinion about. +Apply can send partially specified objects as YAML to this endpoint. +When applying a configuration, one should always include all the fields +that they have an opinion about. ### Clearing ManagedFields @@ -661,6 +673,11 @@ the managedFields, this will result in the managedFields being reset first and the other changes being processed afterwards. As a result the applier takes ownership of any fields updated in the same request. +{{< caution >}} Server Side Apply does not correctly track ownership on +sub-resources that don't receive the resource object type. If you are +using Server Side Apply with such a sub-resource, the changed fields +won't be tracked. {{< /caution >}} + ### Disabling the feature Server Side Apply is a beta feature, so it is enabled by default. To turn this diff --git a/content/en/docs/setup/learning-environment/minikube.md b/content/en/docs/setup/learning-environment/minikube.md index 439e4a10a2920..038a5cfa49b9a 100644 --- a/content/en/docs/setup/learning-environment/minikube.md +++ b/content/en/docs/setup/learning-environment/minikube.md @@ -187,24 +187,26 @@ example, to run version {{< param "fullversion" >}}, you would run the following minikube start --kubernetes-version {{< param "fullversion" >}} ``` #### Specifying the VM driver -You can change the VM driver by adding the `--vm-driver=` flag to `minikube start`. +You can change the VM driver by adding the `--driver=` flag to `minikube start`. For example the command would be. ```shell -minikube start --vm-driver= +minikube start --driver= ``` Minikube supports the following drivers: {{< note >}} - See [DRIVERS](https://git.k8s.io/minikube/docs/drivers.md) for details on supported drivers and how to install + See [DRIVERS](https://minikube.sigs.k8s.io/docs/reference/drivers/) for details on supported drivers and how to install plugins. 
{{< /note >}} * virtualbox * vmwarefusion -* kvm2 ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#kvm2-driver)) -* hyperkit ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#hyperkit-driver)) -* hyperv ([driver installation](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperv-driver)) +* docker (EXPERIMENTAL) +* kvm2 ([driver installation](https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/)) +* hyperkit ([driver installation](https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/)) +* hyperv ([driver installation](https://minikube.sigs.k8s.io/docs/reference/drivers/hyperv/)) Note that the IP below is dynamic and can change. It can be retrieved with `minikube ip`. -* vmware ([driver installation](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#vmware-unified-driver)) (VMware unified driver) +* vmware ([driver installation](https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/)) (VMware unified driver) +* parallels ([driver installation](https://minikube.sigs.k8s.io/docs/reference/drivers/parallels/)) * none (Runs the Kubernetes components on the host and not in a virtual machine. You need to be running Linux and to have {{< glossary_tooltip term_id="docker" >}} installed.) {{< caution >}} @@ -330,8 +332,8 @@ Starting the cluster again will restore it to its previous state. The `minikube delete` command can be used to delete your cluster. This command shuts down and deletes the Minikube Virtual Machine. No data or state is preserved. -### Upgrading minikube -See [upgrade minikube](https://minikube.sigs.k8s.io/docs/start/macos/) +### Upgrading Minikube +If you are using macOS, see [Upgrading Minikube](https://minikube.sigs.k8s.io/docs/start/macos/#upgrading-minikube) to upgrade your existing minikube installation. ## Interacting with Your Cluster diff --git a/content/en/docs/setup/production-environment/container-runtimes.md b/content/en/docs/setup/production-environment/container-runtimes.md index f51ada8f6d80c..972bf1810bc7d 100644 --- a/content/en/docs/setup/production-environment/container-runtimes.md +++ b/content/en/docs/setup/production-environment/container-runtimes.md @@ -64,7 +64,7 @@ is to drain the Node from its workloads, remove it from the cluster and re-join ## Docker On each of your machines, install Docker. -Version 19.03.4 is recommended, but 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09 are known to work as well. +Version 19.03.8 is recommended, but 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09 are known to work as well. Keep track of the latest verified Docker version in the Kubernetes release notes. Use the following commands to install Docker on your system: @@ -88,9 +88,9 @@ add-apt-repository \ ## Install Docker CE. apt-get update && apt-get install -y \ - containerd.io=1.2.10-3 \ - docker-ce=5:19.03.4~3-0~ubuntu-$(lsb_release -cs) \ - docker-ce-cli=5:19.03.4~3-0~ubuntu-$(lsb_release -cs) + containerd.io=1.2.13-1 \ + docker-ce=5:19.03.8~3-0~ubuntu-$(lsb_release -cs) \ + docker-ce-cli=5:19.03.8~3-0~ubuntu-$(lsb_release -cs) # Setup daemon. 
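# The heredoc that follows is truncated in this patch. Its purpose is to
# write a daemon.json that selects Docker's cgroup driver and logging
# settings; kubeadm setups typically use the systemd cgroup driver (for
# example, "exec-opts": ["native.cgroupdriver=systemd"]) so that Docker and
# the kubelet manage cgroups through the same driver. The example key here
# is an assumption for illustration, not a quote from the truncated file.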
cat > /etc/docker/daemon.json <}}
-{{% tab name="Debian or Ubuntu" %}}
-```bash
-# ensure legacy binaries are installed
-sudo apt-get install -y iptables arptables ebtables
-
-# switch to legacy versions
-sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
-sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
-sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
-sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy
-```
-{{% /tab %}}
-{{% tab name="Fedora" %}}
-```bash
-update-alternatives --set iptables /usr/sbin/iptables-legacy
-```
-{{% /tab %}}
-{{< /tabs >}}
-
 ## Check required ports
 ### Control-plane node(s)
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
index c0c1e2b0b223d..a7ef2080522d7 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
@@ -22,6 +22,49 @@ If your problem is not listed below, please follow the following steps:
 {{% capture body %}}
+## Not possible to join a v1.18 Node to a v1.17 cluster due to missing RBAC
+
+In v1.18, kubeadm added protection to prevent joining a Node to the cluster if a Node with the same name already exists.
+This required adding RBAC for the bootstrap-token user to be able to GET a Node object.
+
+However, this causes an issue where `kubeadm join` from v1.18 cannot join a cluster created by kubeadm v1.17.
+
+To work around this issue, you have two options:
+
+Execute `kubeadm init phase bootstrap-token` on a control-plane node using kubeadm v1.18.
+Note that this enables the rest of the bootstrap-token permissions as well.
+
+or
+
+Apply the following RBAC manually using `kubectl apply -f ...`:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: kubeadm:get-nodes
+rules:
+- apiGroups:
+  - ""
+  resources:
+  - nodes
+  verbs:
+  - get
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: kubeadm:get-nodes
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: kubeadm:get-nodes
+subjects:
+- apiGroup: rbac.authorization.k8s.io
+  kind: Group
+  name: system:bootstrappers:kubeadm:default-node-token
+```
+
 ## `ebtables` or some similar executable not found during installation
 If you see the following warnings while running `kubeadm init`
diff --git a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
index 6d079d0274c41..e8e23b8574774 100644
--- a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
+++ b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
@@ -100,7 +100,26 @@ Pods, Controllers and Services are critical elements to managing Windows workloa
 #### Container Runtime
-Docker EE-basic 18.09 is required on Windows Server 2019 / 1809 nodes for Kubernetes. This works with the dockershim code included in the kubelet. Additional runtimes such as CRI-ContainerD may be supported in later Kubernetes versions.
+##### Docker EE
+
+{{< feature-state for_k8s_version="v1.14" state="stable" >}}
+
+Docker EE-basic 18.09+ is the recommended container runtime for Windows Server 2019 / 1809 nodes running Kubernetes.
This works with the dockershim code included in the kubelet. + +##### CRI-ContainerD + +{{< feature-state for_k8s_version="v1.18" state="alpha" >}} + +ContainerD is an OCI-compliant runtime that works with Kubernetes on Linux. Kubernetes v1.18 adds support for {{< glossary_tooltip term_id="containerd" text="ContainerD" >}} on Windows. Progress for ContainerD on Windows can be tracked at [enhancements#1001](https://github.com/kubernetes/enhancements/issues/1001). + +{{< caution >}} + +ContainerD on Windows in Kubernetes v1.18 has the following known shortcomings: + +* ContainerD does not have an official release with support for Windows; all development in Kubernetes has been performed against active ContainerD development branches. Production deployments should always use official releases that have been fully tested and are supported with security fixes. +* Group-Managed Service Accounts are not implemented when using ContainerD - see [containerd/cri#1276](https://github.com/containerd/cri/issues/1276). + +{{< /caution >}} #### Persistent Storage @@ -408,7 +427,6 @@ Your main source of help for troubleshooting your Kubernetes cluster should star # Register kubelet.exe # Microsoft releases the pause infrastructure container at mcr.microsoft.com/k8s/core/pause:1.2.0 - # For more info search for "pause" in the "Guide for adding Windows Nodes in Kubernetes" nssm install kubelet C:\k\kubelet.exe nssm set kubelet AppParameters --hostname-override= --v=6 --pod-infra-container-image=mcr.microsoft.com/k8s/core/pause:1.2.0 --resolv-conf="" --allow-privileged=true --enable-debugging-handlers --cluster-dns= --cluster-domain=cluster.local --kubeconfig=c:\k\config --hairpin-mode=promiscuous-bridge --image-pull-progress-deadline=20m --cgroups-per-qos=false --log-dir= --logtostderr=false --enforce-node-allocatable="" --network-plugin=cni --cni-bin-dir=c:\k\cni --cni-conf-dir=c:\k\cni\config nssm set kubelet AppDirectory C:\k @@ -520,7 +538,7 @@ Your main source of help for troubleshooting your Kubernetes cluster should star Check that your pause image is compatible with your OS version. The [instructions](https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/deploying-resources) assume that both the OS and the containers are version 1803. If you have a later version of Windows, such as an Insider build, you need to adjust the images accordingly. Please refer to the Microsoft's [Docker repository](https://hub.docker.com/u/microsoft/) for images. Regardless, both the pause image Dockerfile and the sample service expect the image to be tagged as :latest. - Starting with Kubernetes v1.14, Microsoft releases the pause infrastructure container at `mcr.microsoft.com/k8s/core/pause:1.2.0`. For more information search for "pause" in the [Guide for adding Windows Nodes in Kubernetes](../user-guide-windows-nodes). + Starting with Kubernetes v1.14, Microsoft releases the pause infrastructure container at `mcr.microsoft.com/k8s/core/pause:1.2.0`. 1. DNS resolution is not properly working @@ -534,6 +552,7 @@ Your main source of help for troubleshooting your Kubernetes cluster should star 1. 
My Kubernetes installation is failing because my Windows Server node is behind a proxy

   If you are behind a proxy, the following PowerShell environment variables must be defined:
+
   ```PowerShell
   [Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://proxy.example.com:80/", [EnvironmentVariableTarget]::Machine)
   [Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://proxy.example.com:443/", [EnvironmentVariableTarget]::Machine)
@@ -571,19 +590,15 @@ If filing a bug, please include detailed information about how to reproduce the
 We have a lot of features in our roadmap. An abbreviated high level list is included below, but we encourage you to view our [roadmap project](https://github.com/orgs/kubernetes/projects/8) and help us make Windows support better by [contributing](https://github.com/kubernetes/community/blob/master/sig-windows/).

-### CRI-ContainerD
-
-{{< glossary_tooltip term_id="containerd" >}} is another OCI-compliant runtime that recently graduated as a {{< glossary_tooltip text="CNCF" term_id="cncf" >}} project. It's currently tested on Linux, but 1.3 will bring support for Windows and Hyper-V. [[reference](https://blog.docker.com/2019/02/containerd-graduates-within-the-cncf/)]
+### Hyper-V isolation

-The CRI-ContainerD interface will be able to manage sandboxes based on Hyper-V. This provides a foundation where RuntimeClass could be implemented for new use cases including:
+Hyper-V isolation is required to enable the following use cases for Windows containers in Kubernetes:

* Hypervisor-based isolation between pods for additional security
* Backwards compatibility allowing a node to run a newer Windows Server version without requiring containers to be rebuilt
* Specific CPU/NUMA settings for a pod
* Memory isolation and reservations

-### Hyper-V isolation
-
The existing Hyper-V isolation support, an experimental feature as of v1.10, will be deprecated in the future in favor of the CRI-ContainerD and RuntimeClass features mentioned above. To use the current features and create a Hyper-V isolated container, the kubelet should be started with the feature gate `HyperVContainer=true` and the Pod should include the annotation `experimental.windows.kubernetes.io/isolation-type=hyperv` (see the illustrative Pod spec later in this section). In the experimental release, this feature is limited to one container per Pod.

```yaml
@@ -612,7 +627,11 @@ spec:
 ### Deployment with kubeadm and cluster API

-Kubeadm is becoming the de facto standard for users to deploy a Kubernetes cluster. Windows node support in kubeadm will come in a future release. We are also making investments in cluster API to ensure Windows nodes are properly provisioned.
+Kubeadm is becoming the de facto standard for users to deploy a Kubernetes
+cluster. Windows node support in kubeadm is currently a work-in-progress but a
+guide is available [here](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/).
+We are also making investments in cluster API to ensure Windows nodes are
+properly provisioned.
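For illustration, here is a minimal sketch of a Pod requesting Hyper-V isolation as described above. The annotation and the `HyperVContainer` feature gate come from this page; the Pod name, command, and the use of the `mcr.microsoft.com/windows/servercore:ltsc2019` image are illustrative, not prescriptive.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iis-hyperv   # illustrative name
  annotations:
    # Request a Hyper-V isolated container; the kubelet must be started
    # with the feature gate HyperVContainer=true.
    experimental.windows.kubernetes.io/isolation-type: hyperv
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  # The experimental release is limited to one container per Pod.
  - name: servercore
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    command: ["powershell", "-Command", "Start-Sleep -Seconds 3600"]
```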
### A few other key features * Beta support for Group Managed Service Accounts diff --git a/content/en/docs/setup/production-environment/windows/kubecluster.ps1-install.gif b/content/en/docs/setup/production-environment/windows/kubecluster.ps1-install.gif deleted file mode 100644 index e3d94b9b54ac2..0000000000000 Binary files a/content/en/docs/setup/production-environment/windows/kubecluster.ps1-install.gif and /dev/null differ diff --git a/content/en/docs/setup/production-environment/windows/kubecluster.ps1-join.gif b/content/en/docs/setup/production-environment/windows/kubecluster.ps1-join.gif deleted file mode 100644 index 828417d685c69..0000000000000 Binary files a/content/en/docs/setup/production-environment/windows/kubecluster.ps1-join.gif and /dev/null differ diff --git a/content/en/docs/setup/production-environment/windows/kubecluster.ps1-reset.gif b/content/en/docs/setup/production-environment/windows/kubecluster.ps1-reset.gif deleted file mode 100644 index e71d40d6dfb09..0000000000000 Binary files a/content/en/docs/setup/production-environment/windows/kubecluster.ps1-reset.gif and /dev/null differ diff --git a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md index a4f177b364ace..a79cc80b59347 100644 --- a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md +++ b/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md @@ -22,7 +22,7 @@ Windows applications constitute a large portion of the services and applications ## Before you begin -* Create a Kubernetes cluster that includes a [master and a worker node running Windows Server](../user-guide-windows-nodes) +* Create a Kubernetes cluster that includes a [master and a worker node running Windows Server](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes) * It is important to note that creating and deploying services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. [Kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The example in the section below is provided simply to jumpstart your experience with Windows containers. ## Getting Started: Deploying a Windows container diff --git a/content/en/docs/setup/production-environment/windows/user-guide-windows-nodes.md b/content/en/docs/setup/production-environment/windows/user-guide-windows-nodes.md deleted file mode 100644 index 297ec97d79232..0000000000000 --- a/content/en/docs/setup/production-environment/windows/user-guide-windows-nodes.md +++ /dev/null @@ -1,356 +0,0 @@ ---- -reviewers: -- michmike -- patricklang -title: Guide for adding Windows Nodes in Kubernetes -min-kubernetes-server-version: v1.14 -content_template: templates/tutorial -weight: 70 ---- - -{{% capture overview %}} - -The Kubernetes platform can now be used to run both Linux and Windows containers. This page shows how one or more Windows nodes can be registered to a cluster. - -{{% /capture %}} - - -{{% capture prerequisites %}} - -* Obtain a [Windows Server 2019 license](https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing) (or higher) in order to configure the Windows node that hosts Windows containers. 
You can use your organization's licenses for the cluster, or acquire one from Microsoft, a reseller, or via the major cloud providers such as GCP, AWS, and Azure by provisioning a virtual machine running Windows Server through their marketplaces. A [time-limited trial](https://www.microsoft.com/en-us/cloud-platform/windows-server-trial) is also available. - -* Build a Linux-based Kubernetes cluster in which you have access to the control-plane (some examples include [Creating a single control-plane cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), [AKS Engine](/docs/setup/production-environment/turnkey/azure/), [GCE](/docs/setup/production-environment/turnkey/gce/), [AWS](/docs/setup/production-environment/turnkey/aws/). - -{{% /capture %}} - - -{{% capture objectives %}} - -* Register a Windows node to the cluster -* Configure networking so Pods and Services on Linux and Windows can communicate with each other - -{{% /capture %}} - - -{{% capture lessoncontent %}} - -## Getting Started: Adding a Windows Node to Your Cluster - -### Plan IP Addressing - -Kubernetes cluster management requires careful planning of your IP addresses so that you do not inadvertently cause network collision. This guide assumes that you are familiar with the [Kubernetes networking concepts](/docs/concepts/cluster-administration/networking/). - -In order to deploy your cluster you need the following address spaces: - -| Subnet / address range | Description | Default value | -| --- | --- | --- | -| Service Subnet | A non-routable, purely virtual subnet that is used by pods to uniformly access services without caring about the network topology. It is translated to/from routable address space by `kube-proxy` running on the nodes. | 10.96.0.0/12 | -| Cluster Subnet | This is a global subnet that is used by all pods in the cluster. Each node is assigned a smaller /24 subnet from this for their pods to use. It must be large enough to accommodate all pods used in your cluster. To calculate *minimumsubnet* size: `(number of nodes) + (number of nodes * maximum pods per node that you configure)`. Example: for a 5 node cluster for 100 pods per node: `(5) + (5 * 100) = 505.` | 10.244.0.0/16 | -| Kubernetes DNS Service IP | IP address of `kube-dns` service that is used for DNS resolution & cluster service discovery. | 10.96.0.10 | - -Review the networking options supported in 'Intro to Windows containers in Kubernetes: Supported Functionality: Networking' to determine how you need to allocate IP addresses for your cluster. - -### Components that run on Windows - -While the Kubernetes control-plane runs on your Linux node(s), the following components are configured and run on your Windows node(s). - -1. kubelet -2. kube-proxy -3. kubectl (optional) -4. Container runtime - -Get the latest binaries from [https://github.com/kubernetes/kubernetes/releases](https://github.com/kubernetes/kubernetes/releases), starting with v1.14 or later. The Windows-amd64 binaries for kubeadm, kubectl, kubelet, and kube-proxy can be found under the CHANGELOG link. - -### Networking Configuration - -Once you have a Linux-based Kubernetes control-plane ("Master") node you are ready to choose a networking solution. This guide illustrates using Flannel in VXLAN mode for simplicity. - -#### Configuring Flannel in VXLAN mode on the Linux control-plane - -1. Prepare Kubernetes master for Flannel - - Some minor preparation is recommended on the Kubernetes master in our cluster. 
It is recommended to enable bridged IPv4 traffic to iptables chains when using Flannel. This can be done using the following command: - - ```bash - sudo sysctl net.bridge.bridge-nf-call-iptables=1 - ``` - -1. Download & configure Flannel - - Download the most recent Flannel manifest: - - ```bash - wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml - ``` - - There are two sections you should modify to enable the vxlan networking backend: - - After applying the steps below, the `net-conf.json` section of `kube-flannel.yml` should look as follows: - - ```json - net-conf.json: | - { - "Network": "10.244.0.0/16", - "Backend": { - "Type": "vxlan", - "VNI" : 4096, - "Port": 4789 - } - } - ``` - - {{< note >}}The VNI must be set to 4096 and port 4789 for Flannel on Linux to interoperate with Flannel on Windows. Support for other VNIs is coming soon. See the [VXLAN documentation](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan) - for an explanation of these fields.{{< /note >}} - -1. In the `net-conf.json` section of your `kube-flannel.yml`, double-check: - 1. The cluster subnet (e.g. "10.244.0.0/16") is set as per your IP plan. - * VNI 4096 is set in the backend - * Port 4789 is set in the backend - 1. In the `cni-conf.json` section of your `kube-flannel.yml`, change the network name to `vxlan0`. - - Your `cni-conf.json` should look as follows: - - ```json - cni-conf.json: | - { - "name": "vxlan0", - "plugins": [ - { - "type": "flannel", - "delegate": { - "hairpinMode": true, - "isDefaultGateway": true - } - }, - { - "type": "portmap", - "capabilities": { - "portMappings": true - } - } - ] - } - ``` - -1. Apply the Flannel manifest and validate - - Let's apply the Flannel configuration: - - ```bash - kubectl apply -f kube-flannel.yml - ``` - - After a few minutes, you should see all the pods as running if the Flannel pod network was deployed. - - ```bash - kubectl get pods --all-namespaces - ``` - - The output looks like as follows: - - ``` - NAMESPACE NAME READY STATUS RESTARTS AGE - kube-system etcd-flannel-master 1/1 Running 0 1m - kube-system kube-apiserver-flannel-master 1/1 Running 0 1m - kube-system kube-controller-manager-flannel-master 1/1 Running 0 1m - kube-system kube-dns-86f4d74b45-hcx8x 3/3 Running 0 12m - kube-system kube-flannel-ds-54954 1/1 Running 0 1m - kube-system kube-proxy-Zjlxz 1/1 Running 0 1m - kube-system kube-scheduler-flannel-master 1/1 Running 0 1m - ``` - - Verify that the Flannel DaemonSet has the NodeSelector applied. - - ```bash - kubectl get ds -n kube-system - ``` - - The output looks like as follows. The NodeSelector `beta.kubernetes.io/os=linux` is applied. - - ``` - NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE - kube-flannel-ds 2 2 2 2 2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux 21d - kube-proxy 2 2 2 2 2 beta.kubernetes.io/os=linux 26d - ``` - - - -### Join Windows Worker Node - -In this section we'll cover configuring a Windows node from scratch to join a cluster on-prem. If your cluster is on a cloud you'll likely want to follow the cloud specific guides in the [public cloud providers section](#public-cloud-providers). - -#### Preparing a Windows Node - -{{< note >}} -All code snippets in Windows sections are to be run in a PowerShell environment with elevated permissions (Administrator) on the Windows worker node. -{{< /note >}} - -1. 
Download the [SIG Windows tools](https://github.com/kubernetes-sigs/sig-windows-tools) repository containing install and join scripts - ```PowerShell - [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12 - Start-BitsTransfer https://github.com/kubernetes-sigs/sig-windows-tools/archive/master.zip - tar -xvf .\master.zip --strip-components 3 sig-windows-tools-master/kubeadm/v1.15.0/* - Remove-Item .\master.zip - ``` - -1. Customize the Kubernetes [configuration file](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/v1.15.0/Kubeclustervxlan.json) - - ``` - { - "Cri" : { // Contains values for container runtime and base container setup - "Name" : "dockerd", // Container runtime name - "Images" : { - "Pause" : "mcr.microsoft.com/k8s/core/pause:1.2.0", // Infrastructure container image - "Nanoserver" : "mcr.microsoft.com/windows/nanoserver:1809", // Base Nanoserver container image - "ServerCore" : "mcr.microsoft.com/windows/servercore:ltsc2019" // Base ServerCore container image - } - }, - "Cni" : { // Contains values for networking executables - "Name" : "flannel", // Name of network fabric - "Source" : [{ // Contains array of objects containing values for network daemon(s) - "Name" : "flanneld", // Name of network daemon - "Url" : "https://github.com/coreos/flannel/releases/download/v0.11.0/flanneld.exe" // Direct URL pointing to network daemon executable - } - ], - "Plugin" : { // Contains values for CNI network plugin - "Name": "vxlan" // Backend network mechanism to use: ["vxlan" | "bridge"] - }, - "InterfaceName" : "Ethernet" // Designated network interface name on Windows node to use as container network - }, - "Kubernetes" : { // Contains values for Kubernetes node binaries - "Source" : { // Contains values for Kubernetes node binaries - "Release" : "1.15.0", // Version of Kubernetes node binaries - "Url" : "https://dl.k8s.io/v1.15.0/kubernetes-node-windows-amd64.tar.gz" // Direct URL pointing to Kubernetes node binaries tarball - }, - "ControlPlane" : { // Contains values associated with Kubernetes control-plane ("Master") node - "IpAddress" : "kubemasterIP", // IP address of control-plane ("Master") node - "Username" : "localadmin", // Username on control-plane ("Master") node with remote SSH access - "KubeadmToken" : "token", // Kubeadm bootstrap token - "KubeadmCAHash" : "discovery-token-ca-cert-hash" // Kubeadm CA key hash - }, - "KubeProxy" : { // Contains values for Kubernetes network proxy configuration - "Gates" : "WinOverlay=true" // Comma-separated key-value pairs passed to kube-proxy feature gate flag - }, - "Network" : { // Contains values for IP ranges in CIDR notation for Kubernetes networking - "ServiceCidr" : "10.96.0.0/12", // Service IP subnet used by Services in CIDR notation - "ClusterCidr" : "10.244.0.0/16" // Cluster IP subnet used by Pods in CIDR notation - } - }, - "Install" : { // Contains values and configurations for Windows node installation - "Destination" : "C:\\ProgramData\\Kubernetes" // Absolute DOS path where Kubernetes will be installed on the Windows node - } -} - ``` - -{{< note >}} -Users can generate values for the `ControlPlane.KubeadmToken` and `ControlPlane.KubeadmCAHash` fields by running `kubeadm token create --print-join-command` on the Kubernetes control-plane ("Master") node. -{{< /note >}} - -1. 
Install containers and Kubernetes (requires a system reboot) - -Use the previously downloaded [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) script to install Kubernetes on the Windows Server container host: - - ```PowerShell - .\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -install - ``` - where `-ConfigFile` points to the path of the Kubernetes configuration file. - -{{< note >}} -In the example below, we are using overlay networking mode. This requires Windows Server version 2019 with [KB4489899](https://support.microsoft.com/help/4489899) and at least Kubernetes v1.14 or above. Users that cannot meet this requirement must use `L2bridge` networking instead by selecting `bridge` as the [plugin](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/v1.15.0/Kubeclusterbridge.json#L18) in the configuration file. -{{< /note >}} - - ![alt_text](../kubecluster.ps1-install.gif "KubeCluster.ps1 install output") - - -On the Windows node you target, this step will: - -1. Enable Windows Server containers role (and reboot) -1. Download and install the chosen container runtime -1. Download all needed container images -1. Download Kubernetes binaries and add them to the `$PATH` environment variable -1. Download CNI plugins based on the selection made in the Kubernetes Configuration file -1. (Optionally) Generate a new SSH key which is required to connect to the control-plane ("Master") node during joining - - {{< note >}}For the SSH key generation step, you also need to add the generated public SSH key to the `authorized_keys` file on your (Linux) control-plane node. You only need to do this once. The script prints out the steps you can follow to do this, at the end of its output.{{< /note >}} - -Once installation is complete, any of the generated configuration files or binaries can be modified before joining the Windows node. - -#### Join the Windows Node to the Kubernetes cluster -This section covers how to join a [Windows node with Kubernetes installed](#preparing-a-windows-node) with an existing (Linux) control-plane, to form a cluster. - -Use the previously downloaded [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) script to join the Windows node to the cluster: - - ```PowerShell - .\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -join - ``` - where `-ConfigFile` points to the path of the Kubernetes configuration file. - -![alt_text](../kubecluster.ps1-join.gif "KubeCluster.ps1 join output") - -{{< note >}} -Should the script fail during the bootstrap or joining procedure for whatever reason, start a new PowerShell session before starting each consecutive join attempt. -{{< /note >}} - -This step will perform the following actions: - -1. Connect to the control-plane ("Master") node via SSH, to retrieve the [Kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file. -1. Register kubelet as a Windows service -1. Configure CNI network plugins -1. Create an HNS network on top of the chosen network interface - {{< note >}} - This may cause a network blip for a few seconds while the vSwitch is being created. - {{< /note >}} -1. (If vxlan plugin is selected) Open up inbound firewall UDP port 4789 for overlay traffic -1. Register flanneld as a Windows service -1. 
Register kube-proxy as a Windows service - -Now you can view the Windows nodes in your cluster by running the following: - -```bash -kubectl get nodes -``` - -#### Remove the Windows Node from the Kubernetes cluster -In this section we'll cover how to remove a Windows node from a Kubernetes cluster. - -Use the previously downloaded [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) script to remove the Windows node from the cluster: - - ```PowerShell - .\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -reset - ``` - where `-ConfigFile` points to the path of the Kubernetes configuration file. - -![alt_text](../kubecluster.ps1-reset.gif "KubeCluster.ps1 reset output") - -This step will perform the following actions on the targeted Windows node: - -1. Delete the Windows node from the Kubernetes cluster -1. Stop all running containers -1. Remove all container networking (HNS) resources -1. Unregister all Kubernetes services (flanneld, kubelet, kube-proxy) -1. Delete all Kubernetes binaries (kube-proxy.exe, kubelet.exe, flanneld.exe, kubeadm.exe) -1. Delete all CNI network plugins binaries -1. Delete [Kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) used to access the Kubernetes cluster - - -### Public Cloud Providers - -#### Azure - -AKS-Engine can deploy a complete, customizable Kubernetes cluster with both Linux & Windows nodes. There is a step-by-step walkthrough available in the [docs on GitHub](https://github.com/Azure/aks-engine/blob/master/docs/topics/windows.md). - -#### GCP - -Users can easily deploy a complete Kubernetes cluster on GCE following this step-by-step walkthrough on [GitHub](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/windows/README-GCE-Windows-kube-up.md) - -#### Deployment with kubeadm and cluster API - -Kubeadm is becoming the de facto standard for users to deploy a Kubernetes cluster. Windows node support in kubeadm is an alpha feature since Kubernetes release v1.16. We are also making investments in cluster API to ensure Windows nodes are properly provisioned. For more details, please consult the [kubeadm for Windows KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/kubeadm/20190424-kubeadm-for-windows.md). - - -### Next Steps - -Now that you've configured a Windows worker in your cluster to run Windows containers you may want to add one or more Linux nodes as well to run Linux containers. You are now ready to schedule Windows containers on your cluster. 
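As a quick sketch of that next step, a workload can be steered to Windows nodes with an OS node selector. This example is illustrative: it reuses the `mcr.microsoft.com/windows/servercore:ltsc2019` image from the configuration file above, and clusters older than v1.14 would use the `beta.kubernetes.io/os` label instead.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: win-servercore   # illustrative name
spec:
  nodeSelector:
    # Schedule this Pod only onto Windows nodes.
    kubernetes.io/os: windows
  containers:
  - name: servercore
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    command: ["powershell", "-Command", "Start-Sleep -Seconds 3600"]
```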
- -{{% /capture %}} - diff --git a/content/en/docs/setup/release/notes.md b/content/en/docs/setup/release/notes.md index c1ad709781b91..a344a11fc0648 100644 --- a/content/en/docs/setup/release/notes.md +++ b/content/en/docs/setup/release/notes.md @@ -1,5 +1,5 @@ --- -title: v1.17 Release Notes +title: v1.18 Release Notes weight: 10 card: name: download @@ -13,731 +13,1360 @@ card: -# v1.17.0 +# v1.18.0 [Documentation](https://docs.k8s.io) -## Downloads for v1.17.0 +## Downloads for v1.18.0 -| filename | sha512 hash | -| ------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------- | -| [kubernetes.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes.tar.gz) | `68d5af15901281954de01164426cfb5ca31c14341387fad34d0cb9aa5f40c932ad44f0de4f987caf2be6bdcea2051e589d25878cf4f9ac0ee73048029a11825f` | -| [kubernetes-src.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-src.tar.gz) | `5424576d7f7936df15243fee0036e7936d2d6224e98ac805ce96cdf7b83a7c5b66dfffc8823d7bc0c17c700fa3c01841208e8cf89be91d237d12e18f3d2f307c` | +filename | sha512 hash +-------- | ----------- +[kubernetes.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes.tar.gz) | `cd5b86a3947a4f2cea6d857743ab2009be127d782b6f2eb4d37d88918a5e433ad2c7ba34221c34089ba5ba13701f58b657f0711401e51c86f4007cb78744dee7` +[kubernetes-src.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-src.tar.gz) | `fb42cf133355ef18f67c8c4bb555aa1f284906c06e21fa41646e086d34ece774e9d547773f201799c0c703ce48d4d0e62c6ba5b2a4d081e12a339a423e111e52` ### Client Binaries -| filename | sha512 hash | -| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------- | -| [kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-client-darwin-386.tar.gz) | `4c9a06409561b8ecc8901d0b88bc955ab8b8c99256b3f6066811539211cff5ba7fb9e3802ac2d8b00a14ce619fa82aeebe83eae9f4b0774bedabd3da0235b78b` | -| [kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-client-darwin-amd64.tar.gz) | `78ce6875c5f5a03bc057e7194fd1966beb621f825ba786d35a9921ab1ae33ed781d0f93a473a6b985da1ba4fbe95c15b23cdca9e439dfd653dbcf5a2b23d1a73` | -| [kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-client-linux-386.tar.gz) | `7a4bcd7d06d0f4ba929451f652c92a3c4d428f9b38ed83093f076bb25699b9c4e82f8f851ab981e68becbf10b148ddab4f7dce3743e84d642baa24c00312a2aa` | -| [kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-client-linux-amd64.tar.gz) | `7f9fc9ac07e9acbf12b58ae9077a8ce1f7fb4b5ceccd3856b55d2beb5e435d4fd27884c10ffdf3e2e18cafd4acc001ed5cf2a0a9a5b0545d9be570f63012d9c0` | -| [kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-client-linux-arm.tar.gz) | `8f74fff80a000cfaefa2409bdce6fd0d546008c7942a7178a4fa88a9b3ca05d10f34352e2ea2aec5297aa5c630c2b9701b507273c0ed0ddc0c297e57b655d62e` | -| [kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-client-linux-arm64.tar.gz) | `18d92b320f138f5080f98f1ffee20e405187549ab3aad55b7f60f02e3b7f5a44eb9826098576b42937fd0aac01fe6bcae36b5a8ee52ddde3571a1281b279c114` | -| [kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-client-linux-ppc64le.tar.gz) | 
`fd9b15a88b3d5a506a84ebfb56de291b85978b14f61a2c05f4bdb6a7e45a36f92af5a024a6178dbebd82a92574ec6d8cf9d8ac912f868f757649a2a8434011fe` | -| [kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-client-linux-s390x.tar.gz) | `ae3b284a78975cbfccaac04ea802085c31fd75cccf4ece3a983f44faf755dd94c43833e60f52c5ea57bc462cb24268ef4b7246876189113f588a012dd58e9630` | -| [kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-client-windows-386.tar.gz) | `4ba83b068e7f4a203bcc5cc8bb2c456a6a9c468e695f86f69d8f2ac81be9a1ce156f9a2f28286cb7eb0480faac397d964821c009473bdb443d84a30b6d020551` | -| [kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-client-windows-amd64.tar.gz) | `fc79b0e926a823c7d8b9010dee0c559587b7f97c9290b2126d517c4272891ce36e310a64c85f3861a1c951da8dc21f46244a59ff9d52b7b7a3f84879f533e6aa` | +filename | sha512 hash +-------- | ----------- +[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-client-darwin-386.tar.gz) | `26df342ef65745df12fa52931358e7f744111b6fe1e0bddb8c3c6598faf73af997c00c8f9c509efcd7cd7e82a0341a718c08fbd96044bfb58e80d997a6ebd3c2` +[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-client-darwin-amd64.tar.gz) | `803a0fed122ef6b85f7a120b5485723eaade765b7bc8306d0c0da03bd3df15d800699d15ea2270bb7797fa9ce6a81da90e730dc793ea4ed8c0149b63d26eca30` +[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-client-linux-386.tar.gz) | `110844511b70f9f3ebb92c15105e6680a05a562cd83f79ce2d2e25c2dd70f0dbd91cae34433f61364ae1ce4bd573b635f2f632d52de8f72b54acdbc95a15e3f0` +[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-client-linux-amd64.tar.gz) | `594ca3eadc7974ec4d9e4168453e36ca434812167ef8359086cd64d048df525b7bd46424e7cc9c41e65c72bda3117326ba1662d1c9d739567f10f5684fd85bee` +[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-client-linux-arm.tar.gz) | `d3627b763606557a6c9a5766c34198ec00b3a3cd72a55bc2cb47731060d31c4af93543fb53f53791062bb5ace2f15cbaa8592ac29009641e41bd656b0983a079` +[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-client-linux-arm64.tar.gz) | `ba9056eff1452cbdaef699efbf88f74f5309b3f7808d372ebf6918442d0c9fea1653c00b9db3b7626399a460eef9b1fa9e29b827b7784f34561cbc380554e2ea` +[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-client-linux-ppc64le.tar.gz) | `f80fb3769358cb20820ff1a1ce9994de5ed194aabe6c73fb8b8048bffc394d1b926de82c204f0e565d53ffe7562faa87778e97a3ccaaaf770034a992015e3a86` +[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-client-linux-s390x.tar.gz) | `a9b658108b6803d60fa3cd4e76d9e58bf75201017164fe54054b7ccadbb68c4ad7ba7800746940bc518d90475e6c0a96965a26fa50882f4f0e56df404f4ae586` +[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-client-windows-386.tar.gz) | `18adffab5d1be146906fd8531f4eae7153576aac235150ce2da05aee5ae161f6bd527e8dec34ae6131396cd4b3771e0d54ce770c065244ad3175a1afa63c89e1` +[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-client-windows-amd64.tar.gz) | `162396256429cef07154f817de2a6b67635c770311f414e38b1e2db25961443f05d7b8eb1f8da46dec8e31c5d1d2cd45f0c95dad1bc0e12a0a7278a62a0b9a6b` ### Server Binaries -| filename | sha512 hash | -| ---------------------------------------------------------------------------------------------------------- | 
---------------------------------------------------------------------------------------------------------------------------------- | -| [kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-server-linux-amd64.tar.gz) | `28b2703c95894ab0565e372517c4a4b2c33d1be3d778fae384a6ab52c06cea7dd7ec80060dbdba17c8ab23bbedcde751cccee7657eba254f7d322cf7c4afc701` | -| [kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-server-linux-arm.tar.gz) | `b36a9f602131dba23f267145399aad0b19e97ab7b5194b2e3c01c57f678d7b0ea30c1ea6b4c15fd87b1fd3bf06abd4ec443bef5a3792c0d813356cdeb3b6a935` | -| [kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-server-linux-arm64.tar.gz) | `42adae077603f25b194e893f15e7f415011f25e173507a190bafbee0d0e86cdd6ee8f11f1bcf0a5366e845bd968f92e5bf66785f20c1125c801cf3ec9850d0bd` | -| [kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-server-linux-ppc64le.tar.gz) | `7e72d4255e661e946203c1c0c684cd0923034eb112c35e3ba08fbf9d1ef5e8bb291840c6ff99aea6180083846f9a9ba88387e176ee7a5def49e1d19366e2789f` | -| [kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-server-linux-s390x.tar.gz) | `00bc634654ec7d1ec2eca7a3e943ac287395503a06c8da22b7efb3a35435ceb323618c6d9931d6693bfb19f2b8467ae8f05f98392df8ee4954556c438409c8d4` | +filename | sha512 hash +-------- | ----------- +[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-server-linux-amd64.tar.gz) | `a92f8d201973d5dfa44a398e95fcf6a7b4feeb1ef879ab3fee1c54370e21f59f725f27a9c09ace8c42c96ac202e297fd458e486c489e05f127a5cade53b8d7c4` +[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-server-linux-arm.tar.gz) | `62fbff3256bc0a83f70244b09149a8d7870d19c2c4b6dee8ca2714fc7388da340876a0f540d2ae9bbd8b81fdedaf4b692c72d2840674db632ba2431d1df1a37d` +[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-server-linux-arm64.tar.gz) | `842910a7013f61a60d670079716b207705750d55a9e4f1f93696d19d39e191644488170ac94d8740f8e3aa3f7f28f61a4347f69d7e93d149c69ac0efcf3688fe` +[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-server-linux-ppc64le.tar.gz) | `95c5b952ac1c4127a5c3b519b664972ee1fb5e8e902551ce71c04e26ad44b39da727909e025614ac1158c258dc60f504b9a354c5ab7583c2ad769717b30b3836` +[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-server-linux-s390x.tar.gz) | `a46522d2119a0fd58074564c1fa95dd8a929a79006b82ba3c4245611da8d2db9fd785c482e1b61a9aa361c5c9a6d73387b0e15e6a7a3d84fffb3f65db3b9deeb` ### Node Binaries -| filename | sha512 hash | -| ------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------- | -| [kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-node-linux-amd64.tar.gz) | `49ef6a41c65b3f26a4f3ffe63b92c8096c26aa27a89d227d935bc06a497c97505ad8bc215b4c5d5ad3af6489c1366cd26ecc8e2781a83f46a91503678abba71b` | -| [kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-node-linux-arm.tar.gz) | `21a213fd572200998bdd71f5ebbb96576fc7a7e7cfb1469f028cc1a310bc2b5c0ce32660629beb166b88f54e6ebecb2022b2ed1fdb902a9b9d5acb193d76fa0f` | -| [kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-node-linux-arm64.tar.gz) | 
`3642ee5e7476080a44005db8e7282fdbe4e4f220622761b95951c2c15b3e10d7b70566bfb7a9a58574f3fc385d5aae80738d88195fa308a07f199cee70f912f4` | -| [kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-node-linux-ppc64le.tar.gz) | `99687088be50a794894911d43827b7e1125fbc86bfba799f77c096ddaa5b2341b31d009b8063a177e503ce2ce0dafbda1115216f8a5777f34e0e2d81f0114104` | -| [kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-node-linux-s390x.tar.gz) | `73b9bc356de43fbed7d3294be747b83e0aac47051d09f1df7be52c33be670b63c2ea35856a483ebc2f57e30a295352b77f1b1a6728afa10ec1f3338cafbdb2bb` | -| [kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.17.0/kubernetes-node-windows-amd64.tar.gz) | `2fbc80f928231f60a5a7e4f427953ef17244b3a8f6fdeebcbfceb05b0587b84933fa723898c64488d94b9ce180357d6d4ca1505ca3c3c7fb11067b7b3bf6361b` | +filename | sha512 hash +-------- | ----------- +[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-node-linux-amd64.tar.gz) | `f714f80feecb0756410f27efb4cf4a1b5232be0444fbecec9f25cb85a7ccccdcb5be588cddee935294f460046c0726b90f7acc52b20eeb0c46a7200cf10e351a` +[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-node-linux-arm.tar.gz) | `806000b5f6d723e24e2f12d19d1b9b3d16c74b855f51c7063284adf1fcc57a96554a3384f8c05a952c6f6b929a05ed12b69151b1e620c958f74c9600f3db0fcb` +[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-node-linux-arm64.tar.gz) | `c207e9ab60587d135897b5366af79efe9d2833f33401e469b2a4e0d74ecd2cf6bb7d1e5bc18d80737acbe37555707f63dd581ccc6304091c1d98dafdd30130b7` +[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-node-linux-ppc64le.tar.gz) | `a542ed5ed02722af44ef12d1602f363fcd4e93cf704da2ea5d99446382485679626835a40ae2ba47a4a26dce87089516faa54479a1cfdee2229e8e35aa1c17d7` +[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-node-linux-s390x.tar.gz) | `651e0db73ee67869b2ae93cb0574168e4bd7918290fc5662a6b12b708fa628282e3f64be2b816690f5a2d0f4ff8078570f8187e65dee499a876580a7a63d1d19` +[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0/kubernetes-node-windows-amd64.tar.gz) | `d726ed904f9f7fe7e8831df621dc9094b87e767410a129aa675ee08417b662ddec314e165f29ecb777110fbfec0dc2893962b6c71950897ba72baaa7eb6371ed` -# Changes +## Changelog since v1.17.0 -A complete changelog for the release notes is now hosted in a customizable format at [relnotes.k8s.io](https://relnotes.k8s.io). Check it out and please give us your feedback! +A complete changelog for the release notes is now hosted in a customizable +format at [https://relnotes.k8s.io][1]. Check it out and please give us your +feedback! + +[1]: https://relnotes.k8s.io/?releaseVersions=1.18.0 ## What’s New (Major Themes) -### Cloud Provider Labels reach General Availability +### Kubernetes Topology Manager Moves to Beta - Align Up! + +A beta feature of Kubernetes in release 1.18, the [Topology Manager feature](https://github.com/nolancon/website/blob/f4200307260ea3234540ef13ed80de325e1a7267/content/en/docs/tasks/administer-cluster/topology-manager.md) enables NUMA alignment of CPU and devices (such as SR-IOV VFs) that will allow your workload to run in an environment optimized for low-latency. Prior to the introduction of the Topology Manager, the CPU and Device Manager would make resource allocation decisions independent of each other. This could result in undesirable allocations on multi-socket systems, causing degraded performance on latency critical applications. 
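As a rough sketch of what opting in looks like, the following `KubeletConfiguration` fragment sets the Topology Manager policy; the `topologyManagerPolicy` and `cpuManagerPolicy` field names come from the kubelet configuration API, while the choice of values here is only an example, not a recommendation.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Require CPU and device allocations for a Pod to come from a single
# NUMA node; the other policies are none, best-effort, and restricted.
topologyManagerPolicy: single-numa-node
# NUMA alignment of CPUs only applies to exclusively allocated CPUs,
# which requires the static CPU manager policy.
cpuManagerPolicy: static
```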
+
+### Server-side Apply - Beta 2
+
+Server-side Apply was promoted to Beta in 1.16 and introduces a second Beta in 1.18. This new version will track and manage changes to fields of all new Kubernetes objects, allowing you to know what changed your resources and when.
+
+### Extending Ingress with IngressClass and replacing a deprecated annotation
+
+In Kubernetes 1.18, there are two significant additions to Ingress: a new `pathType` field and a new `IngressClass` resource. The `pathType` field allows specifying how paths should be matched. In addition to the default `ImplementationSpecific` type, there are new `Exact` and `Prefix` path types.
+
+The `IngressClass` resource is used to describe a type of Ingress within a Kubernetes cluster. Ingresses can specify the class they are associated with by using a new `ingressClassName` field on Ingresses. This new resource and field replace the deprecated `kubernetes.io/ingress.class` annotation.
+
+### SIG CLI introduces kubectl debug
+
+SIG CLI had been debating the need for a debug utility for quite some time. With the development of [ephemeral containers](https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/), it became more obvious how we can support developers with tooling built on top of `kubectl exec`. The addition of the `kubectl debug` [command](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/20190805-kubectl-debug.md) (it is alpha, but your feedback is more than welcome) allows developers to easily debug their Pods inside the cluster. We think this addition is invaluable. This command allows one to create a temporary container which runs next to the Pod one is trying to examine, and attaches to its console for interactive troubleshooting.

-Added as a beta feature way back in v1.2, v1.17 sees the general availability of cloud provider labels.
+### Introducing Windows CSI support alpha for Kubernetes

-### Volume Snapshot Moves to Beta
+With the release of Kubernetes 1.18, an alpha version of CSI Proxy for Windows is being released. CSI Proxy enables non-privileged (pre-approved) containers to perform privileged storage operations on Windows. CSI drivers can now be supported in Windows by leveraging CSI Proxy.
+SIG Storage made a lot of progress in the 1.18 release.
+In particular, the following storage features are moving to GA in Kubernetes 1.18:
+- Raw Block Support: Allow volumes to be surfaced as block devices inside containers instead of just mounted filesystems.
+- Volume Cloning: Duplicate a PersistentVolumeClaim and underlying storage volume using the Kubernetes API via CSI.
+- CSIDriver Kubernetes API Object: Simplifies CSI driver discovery and allows CSI Drivers to customize Kubernetes behavior.

-The Kubernetes Volume Snapshot feature is now beta in Kubernetes v1.17. It was introduced as alpha in Kubernetes v1.12, with a second alpha with breaking changes in Kubernetes v1.13.
+SIG Storage is also introducing the following new storage features as alpha in Kubernetes 1.18:
+- Windows CSI Support: Enabling containerized CSI node plugins in Windows via new [CSIProxy](https://github.com/kubernetes-csi/csi-proxy)
+- Recursive Volume Ownership OnRootMismatch Option: Add a new “OnRootMismatch” policy that can help shorten the mount time for volumes that require ownership change and have many directories and files (a sketch follows below).
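To make the new alpha option above concrete, here is a hedged Pod sketch using `fsGroupChangePolicy`; the field and the `OnRootMismatch` value come from the feature description, while the Pod name, image, and `demo-pvc` claim are placeholders, and the alpha `ConfigurableFSGroupPolicy` feature gate is assumed to be enabled.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo   # placeholder name
spec:
  securityContext:
    fsGroup: 2000
    # Skip the recursive ownership change when the volume root already
    # matches fsGroup, shortening mount time for large volumes.
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-pvc   # placeholder claim
```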
-### CSI Migration Beta +### Other notable announcements -The Kubernetes in-tree storage plugin to Container Storage Interface (CSI) migration infrastructure is now beta in Kubernetes v1.17. CSI migration was introduced as alpha in Kubernetes v1.14. +SIG Network is moving IPv6 to Beta in Kubernetes 1.18, after incrementing significantly the test coverage with new CI jobs. + +NodeLocal DNSCache is an add-on that runs a dnsCache pod as a daemonset to improve clusterDNS performance and reliability. The feature has been in Alpha since 1.13 release. The SIG Network is announcing the GA graduation of Node Local DNSCache [#1351](https://github.com/kubernetes/enhancements/pull/1351) ## Known Issues -- volumeDevices mapping ignored when container is privileged -- The `Should recreate evicted statefulset` conformance [test](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/apps/statefulset.go) fails because `Pod ss-0 expected to be re-created at least once`. This was caused by the `Predicate PodFitsHostPorts failed` scheduling error. The root cause was a host port conflict for port `21017`. This port was in-use as an ephemeral port by another application running on the node. This will be looked at for the 1.18 release. -- client-go discovery clients constructed using `NewDiscoveryClientForConfig` or `NewDiscoveryClientForConfigOrDie` default to rate limits that cause normal discovery request patterns to take several seconds. This is fixed in https://issue.k8s.io/86168 and will be resolved in v1.17.1. As a workaround, the `Burst` value can be adjusted higher in the rest.Config passed into `NewDiscoveryClientForConfig` or `NewDiscoveryClientForConfigOrDie`. -- The IP allocator in v1.17.0 can return errors such as `the cluster IP for service is not within the service CIDR ; please recreate` in the logs of the kube-apiserver. The cause is incorrect CIDR calculations if the service CIDR (`--service-cluster-ip-range`) is set to bits lower than `/16`. This is fixed in http://issue.k8s.io/86534 and will be resolved in v1.17.1. +No Known Issues Reported ## Urgent Upgrade Notes ### (No, really, you MUST read this before you upgrade) -#### Cluster Lifecycle - -- Kubeadm: add a new `kubelet-finalize` phase as part of the `init` workflow and an experimental sub-phase to enable automatic kubelet client certificate rotation on primary control-plane nodes. - Prior to 1.17 and for existing nodes created by `kubeadm init` where kubelet client certificate rotation is desired, you must modify `/etc/kubernetes/kubelet.conf` to point to the PEM symlink for rotation: - `client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem` and `client-key: /var/lib/kubelet/pki/kubelet-client-current.pem`, replacing the embedded client certificate and key. ([#84118](https://github.com/kubernetes/kubernetes/pull/84118), [@neolit123](https://github.com/neolit123)) - -#### Network - -- EndpointSlices: If upgrading a cluster with EndpointSlices already enabled, any EndpointSlices that should be managed by the EndpointSlice controller should have a `http://endpointslice.kubernetes.io/managed-by` label set to `endpointslice-controller.k8s.io`. - -#### Scheduling - -- Kubeadm: when adding extra apiserver authorization-modes, the defaults `Node,RBAC` are no longer prepended in the resulting static Pod manifests and a full override is allowed. 
([#82616](https://github.com/kubernetes/kubernetes/pull/82616), [@ghouscht](https://github.com/ghouscht)) - -#### Storage - -- A node that uses a CSI raw block volume needs to be drained before kubelet can be upgraded to 1.17. ([#74026](https://github.com/kubernetes/kubernetes/pull/74026), [@mkimuram](https://github.com/mkimuram)) - -#### Windows - -- The Windows containers RunAsUsername feature is now beta. -- Windows worker nodes in a Kubernetes cluster now support Windows Server version 1903 in addition to the existing support for Windows Server 2019 -- The RuntimeClass scheduler can now simplify steering Linux or Windows pods to appropriate nodes -- All Windows nodes now get the new label `node.kubernetes.io/windows-build` that reflects the Windows major, minor, and build number that are needed to match compatibility between Windows containers and Windows worker nodes. - -## Deprecations and Removals - -- `kubeadm.k8s.io/v1beta1` has been deprecated, you should update your config to use newer non-deprecated API versions. ([#83276](https://github.com/kubernetes/kubernetes/pull/83276), [@Klaven](https://github.com/Klaven)) -- The deprecated feature gates GCERegionalPersistentDisk, EnableAggregatedDiscoveryTimeout and PersistentLocalVolumes are now unconditionally enabled and can no longer be specified in component invocations. ([#82472](https://github.com/kubernetes/kubernetes/pull/82472), [@draveness](https://github.com/draveness)) -- Deprecate the default service IP CIDR. The previous default was `10.0.0.0/24` which will be removed in 6 months/2 releases. Cluster admins must specify their own desired value, by using `--service-cluster-ip-range` on kube-apiserver. ([#81668](https://github.com/kubernetes/kubernetes/pull/81668), [@darshanime](https://github.com/darshanime)) -- Remove deprecated "include-uninitialized" flag. ([#80337](https://github.com/kubernetes/kubernetes/pull/80337), [@draveness](https://github.com/draveness)) -- All resources within the `rbac.authorization.k8s.io/v1alpha1` and `rbac.authorization.k8s.io/v1beta1` API groups are deprecated in favor of `rbac.authorization.k8s.io/v1`, and will no longer be served in v1.20. ([#84758](https://github.com/kubernetes/kubernetes/pull/84758), [@liggitt](https://github.com/liggitt)) -- The certificate signer no longer accepts ca.key passwords via the `CFSSL_CA_PK_PASSWORD` environment variable. This capability was not prompted by user request, never advertised, and recommended against in the security audit. ([#84677](https://github.com/kubernetes/kubernetes/pull/84677), [@mikedanese](https://github.com/mikedanese)) -- Deprecate the instance type beta label (`beta.kubernetes.io/instance-type`) in favor of its GA equivalent: `node.kubernetes.io/instance-type` ([#82049](https://github.com/kubernetes/kubernetes/pull/82049), [@andrewsykim](https://github.com/andrewsykim)) -- The built-in system:csi-external-provisioner and system:csi-external-attacher cluster roles are removed as of 1.17 release ([#84282](https://github.com/kubernetes/kubernetes/pull/84282), [@tedyu](https://github.com/tedyu)) -- The in-tree GCE PD plugin `kubernetes.io/gce-pd` is now deprecated and will be removed in 1.21. 
Users that self-deploy Kubernetes on GCP should enable CSIMigration + CSIMigrationGCE features and install the GCE PD CSI Driver (https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) to avoid disruption to existing Pod and PVC objects at that time. Users should start using the GCE PD CSI CSI Driver directly for any new volumes. ([#85231](https://github.com/kubernetes/kubernetes/pull/85231), [@davidz627](https://github.com/davidz627)) -- The in-tree AWS EBS plugin `kubernetes.io/aws-ebs` is now deprecated and will be removed in 1.21. Users that self-deploy Kubernetes on AWS should enable CSIMigration + CSIMigrationAWS features and install the AWS EBS CSI Driver (https://github.com/kubernetes-sigs/aws-ebs-csi-driver) to avoid disruption to existing Pod and PVC objects at that time. Users should start using the AWS EBS CSI CSI Driver directly for any new volumes. ([#85237](https://github.com/kubernetes/kubernetes/pull/85237), [@leakingtapan](https://github.com/leakingtapan)) -- The CSINodeInfo feature gate is deprecated and will be removed in a future release. The storage.k8s.io/v1beta1 CSINode object is deprecated and will be removed in a future release. ([#83474](https://github.com/kubernetes/kubernetes/pull/83474), [@msau42](https://github.com/msau42)) -- Removed Alpha feature `MountContainers` ([#84365](https://github.com/kubernetes/kubernetes/pull/84365), [@codenrhoden](https://github.com/codenrhoden)) -- Removed plugin watching of the deprecated directory `{kubelet_root_dir}/plugins` and CSI V0 support in accordance with deprecation announcement in https://v1-13.docs.kubernetes.io/docs/setup/release/notes ([#84533](https://github.com/kubernetes/kubernetes/pull/84533), [@davidz627](https://github.com/davidz627)) -- kubeadm deprecates the use of the hyperkube image ([#85094](https://github.com/kubernetes/kubernetes/pull/85094), [@rosti](https://github.com/rosti)) - -## Metrics Changes - -### Added metrics - -- Add `scheduler_goroutines` metric to track number of kube-scheduler binding and prioritizing goroutines ([#83535](https://github.com/kubernetes/kubernetes/pull/83535), [@wgliang](https://github.com/wgliang)) -- Adding initial EndpointSlice metrics. ([#83257](https://github.com/kubernetes/kubernetes/pull/83257), [@robscott](https://github.com/robscott)) -- Adds a metric `apiserver_request_error_total` to kube-apiserver. This metric tallies the number of `request_errors` encountered by verb, group, version, resource, subresource, scope, component, and code. ([#83427](https://github.com/kubernetes/kubernetes/pull/83427), [@logicalhan](https://github.com/logicalhan)) -- A new `kubelet_preemptions` metric is reported from Kubelets to track the number of preemptions occurring over time, and which resource is triggering those preemptions. ([#84120](https://github.com/kubernetes/kubernetes/pull/84120), [@smarterclayton](https://github.com/smarterclayton)) -- Kube-apiserver: Added metrics `authentication_latency_seconds` that can be used to understand the latency of authentication. ([#82409](https://github.com/kubernetes/kubernetes/pull/82409), [@RainbowMango](https://github.com/RainbowMango)) -- Add `plugin_execution_duration_seconds` metric for scheduler framework plugins. 
([#84522](https://github.com/kubernetes/kubernetes/pull/84522), [@liu-cong](https://github.com/liu-cong)) -- Add `permit_wait_duration_seconds` metric to the scheduler. ([#84011](https://github.com/kubernetes/kubernetes/pull/84011), [@liu-cong](https://github.com/liu-cong)) - -### Deprecated/changed metrics - -- etcd version monitor metrics are now marked as with the ALPHA stability level. ([#83283](https://github.com/kubernetes/kubernetes/pull/83283), [@RainbowMango](https://github.com/RainbowMango)) -- Change `pod_preemption_victims` metric from Gauge to Histogram. ([#83603](https://github.com/kubernetes/kubernetes/pull/83603), [@Tabrizian](https://github.com/Tabrizian)) -- Following metrics from kubelet are now marked as with the ALPHA stability level: - `kubelet_container_log_filesystem_used_bytes` - `kubelet_volume_stats_capacity_bytes` - `kubelet_volume_stats_available_bytes` - `kubelet_volume_stats_used_bytes` - `kubelet_volume_stats_inodes` - `kubelet_volume_stats_inodes_free` - `kubelet_volume_stats_inodes_used` - `plugin_manager_total_plugins` - `volume_manager_total_volumes` - ([#84907](https://github.com/kubernetes/kubernetes/pull/84907), [@RainbowMango](https://github.com/RainbowMango)) -- Deprecated metric `rest_client_request_latency_seconds` has been turned off. ([#83836](https://github.com/kubernetes/kubernetes/pull/83836), [@RainbowMango](https://github.com/RainbowMango)) -- Following metrics from kubelet are now marked as with the ALPHA stability level: - `node_cpu_usage_seconds_total` - `node_memory_working_set_bytes` - `container_cpu_usage_seconds_total` - `container_memory_working_set_bytes` - `scrape_error` - ([#84987](https://github.com/kubernetes/kubernetes/pull/84987), [@RainbowMango](https://github.com/RainbowMango)) -- Deprecated prometheus request meta-metrics have been removed - `http_request_duration_microseconds` `http_request_duration_microseconds_sum` `http_request_duration_microseconds_count` - `http_request_size_bytes` - `http_request_size_bytes_sum` - `http_request_size_bytes_count` - `http_requests_total, http_response_size_bytes` - `http_response_size_bytes_sum` - `http_response_size_bytes_count` - due to removal from the prometheus client library. Prometheus http request meta-metrics are now generated from [`promhttp.InstrumentMetricHandler`](https://godoc.org/github.com/prometheus/client_golang/prometheus/promhttp#InstrumentMetricHandler) instead. 
-- The following metrics from kube-controller-manager are now marked with the ALPHA stability level:
-  - `storage_count_attachable_volumes_in_use`
-  - `attachdetach_controller_total_volumes`
-  - `pv_collector_bound_pv_count`
-  - `pv_collector_unbound_pv_count`
-  - `pv_collector_bound_pvc_count`
-  - `pv_collector_unbound_pvc_count`
-  ([#84896](https://github.com/kubernetes/kubernetes/pull/84896), [@RainbowMango](https://github.com/RainbowMango))
-- The following metrics have been turned off:
-  - `apiserver_request_count`
-  - `apiserver_request_latencies`
-  - `apiserver_request_latencies_summary`
-  - `apiserver_dropped_requests`
-  - `etcd_request_latencies_summary`
-  - `apiserver_storage_transformation_latencies_microseconds`
-  - `apiserver_storage_data_key_generation_latencies_microseconds`
-  - `apiserver_storage_transformation_failures_total`
-  ([#83837](https://github.com/kubernetes/kubernetes/pull/83837), [@RainbowMango](https://github.com/RainbowMango))
+#### kube-apiserver:
+- in an `--encryption-provider-config` config file, an explicit `cacheSize: 0` parameter previously silently defaulted to caching 1000 keys. In Kubernetes 1.18, this now returns a config validation error. To disable caching, you can specify a negative cacheSize value in Kubernetes 1.18+ (see the sketch below).
+- consumers of the 'certificatesigningrequests/approval' API must now have permission to 'approve' CSRs for the specific signer requested by the CSR. More information on the new signerName field and the required authorization can be found at https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests#authorization ([#88246](https://github.com/kubernetes/kubernetes/pull/88246), [@munnerz](https://github.com/munnerz)) [SIG API Machinery, Apps, Auth, CLI, Node and Testing]
+- The following features are unconditionally enabled and the corresponding `--feature-gates` flags have been removed: `PodPriority`, `TaintNodesByCondition`, `ResourceQuotaScopeSelectors` and `ScheduleDaemonSetPods` ([#86210](https://github.com/kubernetes/kubernetes/pull/86210), [@draveness](https://github.com/draveness)) [SIG Apps and Scheduling]
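+
+A minimal sketch of disabling the cache, assuming a KMS provider whose name and socket path are purely illustrative; double-check the exact field spelling against the EncryptionConfiguration reference for your release:
+
+```bash
+# Write an encryption provider config that disables key caching (1.18+).
+cat <<EOF > /etc/kubernetes/encryption-config.yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      - kms:
+          name: myKmsPlugin                    # illustrative plugin name
+          endpoint: unix:///var/run/kms.sock   # illustrative socket path
+          cacheSize: -1                        # a negative value disables caching; 0 is now rejected
+          timeout: 3s
+      - identity: {}
+EOF
+# Then point the API server at it:
+# kube-apiserver ... --encryption-provider-config=/etc/kubernetes/encryption-config.yaml
+```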
+
+#### kubelet:
+- `--enable-cadvisor-endpoints` is now disabled by default. If you need access to the cAdvisor v1 JSON API please enable it explicitly in the kubelet command line. Please note that this flag was deprecated in 1.15 and will be removed in 1.19. ([#87440](https://github.com/kubernetes/kubernetes/pull/87440), [@dims](https://github.com/dims)) [SIG Instrumentation, Node and Testing]
+- Promote CSIMigrationOpenStack to Beta (off by default since it requires installation of the OpenStack Cinder CSI Driver). The in-tree OpenStack Cinder driver "kubernetes.io/cinder" was deprecated in 1.16 and will be removed in 1.20. Users should enable CSIMigration + CSIMigrationOpenStack features and install the OpenStack Cinder CSI Driver (https://github.com/kubernetes-sigs/cloud-provider-openstack) to avoid disruption to existing Pod and PVC objects at that time. Users should start using the OpenStack Cinder CSI Driver directly for any new volumes. ([#85637](https://github.com/kubernetes/kubernetes/pull/85637), [@dims](https://github.com/dims)) [SIG Cloud Provider]
+
+#### kubectl:
+- `kubectl` and k8s.io/client-go no longer default to a server address of `http://localhost:8080`. If you own one of these legacy clusters, you are *strongly* encouraged to secure your server. If you cannot secure your server, you can set the `$KUBERNETES_MASTER` environment variable to `http://localhost:8080` to continue defaulting the server address. `kubectl` users can also set the server address using the `--server` flag, or in a kubeconfig file specified via `--kubeconfig` or `$KUBECONFIG` (see the sketch after this section). ([#86173](https://github.com/kubernetes/kubernetes/pull/86173), [@soltysh](https://github.com/soltysh)) [SIG API Machinery, CLI and Testing]
+- `kubectl run` has removed the previously deprecated generators, along with flags unrelated to creating pods. `kubectl run` now only creates pods. See specific `kubectl create` subcommands to create objects other than pods. ([#87077](https://github.com/kubernetes/kubernetes/pull/87077), [@soltysh](https://github.com/soltysh)) [SIG Architecture, CLI and Testing]
+- The deprecated command `kubectl rolling-update` has been removed ([#88057](https://github.com/kubernetes/kubernetes/pull/88057), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG Architecture, CLI and Testing]
+
+#### client-go:
+- Signatures on methods in generated clientsets, dynamic, metadata, and scale clients have been modified to accept `context.Context` as a first argument. Signatures of Create, Update, and Patch methods have been updated to accept CreateOptions, UpdateOptions and PatchOptions respectively. Signatures of Delete and DeleteCollection methods now accept DeleteOptions by value instead of by reference. Generated clientsets with the previous interface have been added in new "deprecated" packages to allow incremental migration to the new APIs. The deprecated packages will be removed in the 1.21 release. A tool is available at http://sigs.k8s.io/clientgofix to rewrite method invocations to the new signatures.
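+
+A sketch of the kubectl options above; the server address and kubeconfig path are illustrative:
+
+```bash
+# Point kubectl at a cluster explicitly instead of relying on the removed default:
+kubectl --server=https://k8s.example.com:6443 get nodes
+
+# Or select a kubeconfig file:
+kubectl --kubeconfig=/path/to/kubeconfig get nodes
+export KUBECONFIG=/path/to/kubeconfig
+
+# Only for an intentionally insecure legacy cluster:
+export KUBERNETES_MASTER=http://localhost:8080
+```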
+
+- The following deprecated metrics are removed; please convert to the corresponding replacement metrics:
+  - The following replacement metrics are available from v1.14.0:
+    - `rest_client_request_latency_seconds` -> `rest_client_request_duration_seconds`
+    - `scheduler_scheduling_latency_seconds` -> `scheduler_scheduling_duration_seconds`
+    - `docker_operations` -> `docker_operations_total`
+    - `docker_operations_latency_microseconds` -> `docker_operations_duration_seconds`
+    - `docker_operations_errors` -> `docker_operations_errors_total`
+    - `docker_operations_timeout` -> `docker_operations_timeout_total`
+    - `network_plugin_operations_latency_microseconds` -> `network_plugin_operations_duration_seconds`
+    - `kubelet_pod_worker_latency_microseconds` -> `kubelet_pod_worker_duration_seconds`
+    - `kubelet_pod_start_latency_microseconds` -> `kubelet_pod_start_duration_seconds`
+    - `kubelet_cgroup_manager_latency_microseconds` -> `kubelet_cgroup_manager_duration_seconds`
+    - `kubelet_pod_worker_start_latency_microseconds` -> `kubelet_pod_worker_start_duration_seconds`
+    - `kubelet_pleg_relist_latency_microseconds` -> `kubelet_pleg_relist_duration_seconds`
+    - `kubelet_pleg_relist_interval_microseconds` -> `kubelet_pleg_relist_interval_seconds`
+    - `kubelet_eviction_stats_age_microseconds` -> `kubelet_eviction_stats_age_seconds`
+    - `kubelet_runtime_operations` -> `kubelet_runtime_operations_total`
+    - `kubelet_runtime_operations_latency_microseconds` -> `kubelet_runtime_operations_duration_seconds`
+    - `kubelet_runtime_operations_errors` -> `kubelet_runtime_operations_errors_total`
+    - `kubelet_device_plugin_registration_count` -> `kubelet_device_plugin_registration_total`
+    - `kubelet_device_plugin_alloc_latency_microseconds` -> `kubelet_device_plugin_alloc_duration_seconds`
+    - `scheduler_e2e_scheduling_latency_microseconds` -> `scheduler_e2e_scheduling_duration_seconds`
+    - `scheduler_scheduling_algorithm_latency_microseconds` -> `scheduler_scheduling_algorithm_duration_seconds`
+    - `scheduler_scheduling_algorithm_predicate_evaluation` -> `scheduler_scheduling_algorithm_predicate_evaluation_seconds`
+    - `scheduler_scheduling_algorithm_priority_evaluation` -> `scheduler_scheduling_algorithm_priority_evaluation_seconds`
+    - `scheduler_scheduling_algorithm_preemption_evaluation` -> `scheduler_scheduling_algorithm_preemption_evaluation_seconds`
+    - `scheduler_binding_latency_microseconds` -> `scheduler_binding_duration_seconds`
+    - `kubeproxy_sync_proxy_rules_latency_microseconds` -> `kubeproxy_sync_proxy_rules_duration_seconds`
+    - `apiserver_request_latencies` -> `apiserver_request_duration_seconds`
+    - `apiserver_dropped_requests` -> `apiserver_dropped_requests_total`
+    - `etcd_request_latencies_summary` -> `etcd_request_duration_seconds`
+    - `apiserver_storage_transformation_latencies_microseconds` -> `apiserver_storage_transformation_duration_seconds`
+    - `apiserver_storage_data_key_generation_latencies_microseconds` -> `apiserver_storage_data_key_generation_duration_seconds`
+    - `apiserver_request_count` -> `apiserver_request_total`
+    - `apiserver_request_latencies_summary`
+  - The following replacement metrics are available from v1.15.0:
+    - `apiserver_storage_transformation_failures_total` -> `apiserver_storage_transformation_operations_total` ([#76496](https://github.com/kubernetes/kubernetes/pull/76496), [@danielqsj](https://github.com/danielqsj)) [SIG API Machinery, Cluster Lifecycle, Instrumentation, Network, Node and Scheduling]
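+
+A quick way to find dashboards or alerting rules that still reference the removed names; the search paths are illustrative:
+
+```bash
+# List occurrences of a few removed metric names in local monitoring config;
+# extend the pattern with the other renamed metrics as needed.
+grep -rnE 'apiserver_request_count|apiserver_request_latencies|kubeproxy_sync_proxy_rules_latency_microseconds' \
+  /etc/prometheus/rules/ ./dashboards/
+```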
+
+## Changes by Kind
+
+### Deprecation
+
+#### kube-apiserver:
+- the following deprecated APIs can no longer be served (see the migration sketch below):
+  - All resources under `apps/v1beta1` and `apps/v1beta2` - use `apps/v1` instead
+  - `daemonsets`, `deployments`, `replicasets` resources under `extensions/v1beta1` - use `apps/v1` instead
+  - `networkpolicies` resources under `extensions/v1beta1` - use `networking.k8s.io/v1` instead
+  - `podsecuritypolicies` resources under `extensions/v1beta1` - use `policy/v1beta1` instead ([#85903](https://github.com/kubernetes/kubernetes/pull/85903), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, Apps, Cluster Lifecycle, Instrumentation and Testing]
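+
+A sketch of migrating a manifest off a removed API group; the file name is illustrative, and note that `apps/v1` additionally requires an explicit `spec.selector`:
+
+```bash
+# Confirm the replacement group/version is served by the cluster:
+kubectl api-versions | grep '^apps/'
+
+# Rewrite the apiVersion of a Deployment manifest, then validate it server-side:
+sed -i 's|^apiVersion: extensions/v1beta1$|apiVersion: apps/v1|' deployment.yaml
+kubectl apply --dry-run=server -f deployment.yaml
+```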
+
+#### kube-controller-manager:
+- Azure service annotation service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset has been deprecated. Its support will be removed in a future release. ([#88462](https://github.com/kubernetes/kubernetes/pull/88462), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+
+#### kubelet:
+- The StreamingProxyRedirects feature and `--redirect-container-streaming` flag are deprecated, and will be removed in a future release. The default behavior (proxy streaming requests through the kubelet) will be the only supported option. If you are setting `--redirect-container-streaming=true`, then you must migrate off this configuration. The flag will no longer be able to be enabled starting in v1.20. If you are not setting the flag, no action is necessary. ([#88290](https://github.com/kubernetes/kubernetes/pull/88290), [@tallclair](https://github.com/tallclair)) [SIG API Machinery and Node]
+- resource metrics endpoint `/metrics/resource/v1alpha1` as well as all metrics under this endpoint have been deprecated. Please convert to the following metrics emitted by endpoint `/metrics/resource`:
+  - scrape_error --> scrape_error
+  - node_cpu_usage_seconds_total --> node_cpu_usage_seconds
+  - node_memory_working_set_bytes --> node_memory_working_set_bytes
+  - container_cpu_usage_seconds_total --> container_cpu_usage_seconds
+  - container_memory_working_set_bytes --> container_memory_working_set_bytes
+  ([#86282](https://github.com/kubernetes/kubernetes/pull/86282), [@RainbowMango](https://github.com/RainbowMango)) [SIG Node]
+- In a future release, kubelet will no longer create the CSI NodePublishVolume target directory, in accordance with the CSI specification. CSI drivers may need to be updated accordingly to properly create and process the target path. ([#75535](https://github.com/kubernetes/kubernetes/issues/75535)) [SIG Storage]
+
+#### kube-proxy:
+- `--healthz-port` and `--metrics-port` flags are deprecated; please use `--healthz-bind-address` and `--metrics-bind-address` instead ([#88512](https://github.com/kubernetes/kubernetes/pull/88512), [@SataQiu](https://github.com/SataQiu)) [SIG Network]
+- a new `EndpointSliceProxying` feature gate has been added to control the use of EndpointSlices in kube-proxy. The EndpointSlice feature gate that used to control this behavior no longer affects kube-proxy. This feature is disabled by default. ([#86137](https://github.com/kubernetes/kubernetes/pull/86137), [@robscott](https://github.com/robscott))
+
+#### kubeadm:
+- command line option "kubelet-version" for `kubeadm upgrade node` has been deprecated and will be removed in a future release. ([#87942](https://github.com/kubernetes/kubernetes/pull/87942), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- the experimental flag '--use-api' of the 'kubeadm alpha certs renew' command is deprecated. ([#88827](https://github.com/kubernetes/kubernetes/pull/88827), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- kube-dns is deprecated and will not be supported in a future version ([#86574](https://github.com/kubernetes/kubernetes/pull/86574), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- the `ClusterStatus` struct present in the kubeadm-config ConfigMap is deprecated and will be removed in a future version. It will be maintained by kubeadm until it is removed. The same information can be found on `etcd` and `kube-apiserver` pod annotations, `kubeadm.kubernetes.io/etcd.advertise-client-urls` and `kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint` respectively. ([#87656](https://github.com/kubernetes/kubernetes/pull/87656), [@ereslibre](https://github.com/ereslibre)) [SIG Cluster Lifecycle]
+
+#### kubectl:
+- the boolean and unset values for the --dry-run flag are deprecated and a value --dry-run=server|client|none will be required in a future version. ([#87580](https://github.com/kubernetes/kubernetes/pull/87580), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI]
+- `kubectl apply --server-dry-run` is deprecated and replaced with --dry-run=server ([#87580](https://github.com/kubernetes/kubernetes/pull/87580), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI]
+
+#### add-ons:
+- Remove cluster-monitoring addon ([#85512](https://github.com/kubernetes/kubernetes/pull/85512), [@serathius](https://github.com/serathius)) [SIG Cluster Lifecycle, Instrumentation, Scalability and Testing]
+
+#### kube-scheduler:
+- The `scheduling_duration_seconds` summary metric is deprecated ([#86586](https://github.com/kubernetes/kubernetes/pull/86586), [@xiaoanyunfei](https://github.com/xiaoanyunfei)) [SIG Scheduling]
+- The `scheduling_algorithm_predicate_evaluation_seconds` and `scheduling_algorithm_priority_evaluation_seconds` metrics are deprecated, replaced by `framework_extension_point_duration_seconds[extension_point="Filter"]` and `framework_extension_point_duration_seconds[extension_point="Score"]`. ([#86584](https://github.com/kubernetes/kubernetes/pull/86584), [@xiaoanyunfei](https://github.com/xiaoanyunfei)) [SIG Scheduling]
+- `AlwaysCheckAllPredicates` is deprecated in the scheduler Policy API. ([#86369](https://github.com/kubernetes/kubernetes/pull/86369), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling]
+
+#### Other deprecations:
+- The k8s.io/node-api component is no longer updated. Instead, use the RuntimeClass types located within k8s.io/api, and the generated clients located within k8s.io/client-go ([#87503](https://github.com/kubernetes/kubernetes/pull/87503), [@liggitt](https://github.com/liggitt)) [SIG Node and Release]
+- Removed the 'client' label from apiserver_request_total. ([#87669](https://github.com/kubernetes/kubernetes/pull/87669), [@logicalhan](https://github.com/logicalhan)) [SIG API Machinery and Instrumentation]
+
+### API Change
+
+#### New API types/versions:
+- A new IngressClass resource has been added to enable better Ingress configuration. ([#88509](https://github.com/kubernetes/kubernetes/pull/88509), [@robscott](https://github.com/robscott)) [SIG API Machinery, Apps, CLI, Network, Node and Testing]
+- The CSIDriver API has graduated to storage.k8s.io/v1, and is now available for use. ([#84814](https://github.com/kubernetes/kubernetes/pull/84814), [@huffmanca](https://github.com/huffmanca)) [SIG Storage]
+
+#### New API fields:
+- autoscaling/v2beta2 HorizontalPodAutoscaler added a `spec.behavior` field that allows scale behavior to be configured. Behaviors are specified separately for scaling up and down. In each direction a stabilization window can be specified as well as a list of policies and how to select amongst them. Policies can limit the absolute number of pods added or removed, or the percentage of pods added or removed. ([#74525](https://github.com/kubernetes/kubernetes/pull/74525), [@gliush](https://github.com/gliush)) [SIG API Machinery, Apps, Autoscaling and CLI]
+- Ingress (see the illustrative manifest below):
+  - `spec.ingressClassName` replaces the deprecated `kubernetes.io/ingress.class` annotation, and allows associating an Ingress object with a particular controller.
+  - path definitions added a `pathType` field to allow indicating how the specified path should be matched against incoming requests. Valid values are `Exact`, `Prefix`, and `ImplementationSpecific` ([#88587](https://github.com/kubernetes/kubernetes/pull/88587), [@cmluciano](https://github.com/cmluciano)) [SIG Apps, Cluster Lifecycle and Network]
+- The alpha feature `AnyVolumeDataSource` enables PersistentVolumeClaim objects to use the spec.dataSource field to reference a custom type as a data source ([#88636](https://github.com/kubernetes/kubernetes/pull/88636), [@bswartz](https://github.com/bswartz)) [SIG Apps and Storage]
+- The alpha feature `ConfigurableFSGroupPolicy` enables v1 Pods to specify a spec.securityContext.fsGroupChangePolicy policy to control how file permissions are applied to volumes mounted into the pod. ([#88488](https://github.com/kubernetes/kubernetes/pull/88488), [@gnufied](https://github.com/gnufied)) [SIG Storage]
+- The alpha feature `ServiceAppProtocol` enables setting an `appProtocol` field in ServicePort and EndpointPort definitions. ([#88503](https://github.com/kubernetes/kubernetes/pull/88503), [@robscott](https://github.com/robscott)) [SIG Apps and Network]
+- The alpha feature `ImmutableEphemeralVolumes` enables an `immutable` field in both Secret and ConfigMap objects to mark their contents as immutable. ([#86377](https://github.com/kubernetes/kubernetes/pull/86377), [@wojtek-t](https://github.com/wojtek-t)) [SIG Apps, CLI and Testing]
+
+#### Other API changes:
+- The beta feature `ServerSideApply` enables tracking and managing changed fields for all new objects, which means there will be `managedFields` in `metadata` with the list of managers and their owned fields.
+- The alpha feature `ServiceAccountIssuerDiscovery` enables publishing OIDC discovery information and service account token verification keys at `/.well-known/openid-configuration` and `/openid/v1/jwks` endpoints by API servers configured to issue service account tokens. ([#80724](https://github.com/kubernetes/kubernetes/pull/80724), [@cceckman](https://github.com/cceckman)) [SIG API Machinery, Auth, Cluster Lifecycle and Testing]
+- CustomResourceDefinition schemas that use `x-kubernetes-list-map-keys` to specify properties that uniquely identify list items must make those properties required or have a default value, to ensure those properties are present for all list items. See https://kubernetes.io/docs/reference/using-api/api-concepts/#merge-strategy for details. ([#88076](https://github.com/kubernetes/kubernetes/pull/88076), [@eloyekunle](https://github.com/eloyekunle)) [SIG API Machinery and Testing]
+- CustomResourceDefinition schemas that use `x-kubernetes-list-type: map` or `x-kubernetes-list-type: set` now enable validation that the list items in the corresponding custom resources are unique. ([#84920](https://github.com/kubernetes/kubernetes/pull/84920), [@sttts](https://github.com/sttts)) [SIG API Machinery]
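+
+An illustrative manifest for the new Ingress fields above; the class name, host, service, and port are made up for the sketch:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1beta1
+kind: Ingress
+metadata:
+  name: example-ingress
+spec:
+  ingressClassName: external-lb      # replaces the kubernetes.io/ingress.class annotation
+  rules:
+    - host: app.example.com
+      http:
+        paths:
+          - path: /app
+            pathType: Prefix         # Exact | Prefix | ImplementationSpecific
+            backend:
+              serviceName: app-svc
+              servicePort: 80
+EOF
+```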
+
+#### Configuration file changes:
+
+#### kube-apiserver:
+- The `--egress-selector-config-file` configuration file now accepts an apiserver.k8s.io/v1beta1 EgressSelectorConfiguration configuration object, and has been updated to allow specifying HTTP or GRPC connections to the network proxy ([#87179](https://github.com/kubernetes/kubernetes/pull/87179), [@Jefftree](https://github.com/Jefftree)) [SIG API Machinery, Cloud Provider and Cluster Lifecycle]
+
+#### kube-scheduler:
+- A kubescheduler.config.k8s.io/v1alpha2 configuration file version is now accepted, with support for multiple scheduling profiles ([#87628](https://github.com/kubernetes/kubernetes/pull/87628), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+  - HardPodAffinityWeight moved from a top level ComponentConfig parameter to a PluginConfig parameter of the InterPodAffinity Plugin in `kubescheduler.config.k8s.io/v1alpha2` ([#88002](https://github.com/kubernetes/kubernetes/pull/88002), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling and Testing]
+  - Kube-scheduler can run more than one scheduling profile. Given a pod, the profile is selected by using its `.spec.schedulerName`. ([#88285](https://github.com/kubernetes/kubernetes/pull/88285), [@alculquicondor](https://github.com/alculquicondor)) [SIG Apps, Scheduling and Testing]
+  - Scheduler Extenders can now be configured in the v1alpha2 component config ([#88768](https://github.com/kubernetes/kubernetes/pull/88768), [@damemi](https://github.com/damemi)) [SIG Release, Scheduling and Testing]
+  - The PostFilter of the scheduler framework is renamed to PreScore in kubescheduler.config.k8s.io/v1alpha2. ([#87751](https://github.com/kubernetes/kubernetes/pull/87751), [@skilxn-go](https://github.com/skilxn-go)) [SIG Scheduling and Testing]
+
+#### kube-proxy:
+- Added kube-proxy flags `--ipvs-tcp-timeout`, `--ipvs-tcpfin-timeout`, `--ipvs-udp-timeout` to configure IPVS connection timeouts. ([#85517](https://github.com/kubernetes/kubernetes/pull/85517), [@andrewsykim](https://github.com/andrewsykim)) [SIG Cluster Lifecycle and Network]
+- Added optional `--detect-local-mode` flag to kube-proxy. Valid values are "ClusterCIDR" (default, matching previous behavior) and "NodeCIDR" ([#87748](https://github.com/kubernetes/kubernetes/pull/87748), [@satyasm](https://github.com/satyasm)) [SIG Cluster Lifecycle, Network and Scheduling]
Valid values are "ClusterCIDR" (default matching previous behavior) and "NodeCIDR" ([#87748](https://github.com/kubernetes/kubernetes/pull/87748), [@satyasm](https://github.com/satyasm)) [SIG Cluster Lifecycle, Network and Scheduling] +- Kube-controller-manager and kube-scheduler expose profiling by default to match the kube-apiserver. Use `--enable-profiling=false` to disable. ([#88663](https://github.com/kubernetes/kubernetes/pull/88663), [@deads2k](https://github.com/deads2k)) [SIG API Machinery, Cloud Provider and Scheduling] +- Kubelet pod resources API now provides the information about active pods only. ([#79409](https://github.com/kubernetes/kubernetes/pull/79409), [@takmatsu](https://github.com/takmatsu)) [SIG Node] +- New flag `--endpointslice-updates-batch-period` in kube-controller-manager can be used to reduce the number of endpointslice updates generated by pod changes. ([#88745](https://github.com/kubernetes/kubernetes/pull/88745), [@mborsz](https://github.com/mborsz)) [SIG API Machinery, Apps and Network] +- New flag `--show-hidden-metrics-for-version` in kube-proxy, kubelet, kube-controller-manager, and kube-scheduler can be used to show all hidden metrics that are deprecated in the previous minor release. ([#85279](https://github.com/kubernetes/kubernetes/pull/85279), [@RainbowMango](https://github.com/RainbowMango)) [SIG Cluster Lifecycle and Network] + +#### Features graduated to beta: + - StartupProbe ([#83437](https://github.com/kubernetes/kubernetes/pull/83437), [@matthyx](https://github.com/matthyx)) [SIG Node, Scalability and Testing] + +#### Features graduated to GA: + - VolumePVCDataSource ([#88686](https://github.com/kubernetes/kubernetes/pull/88686), [@j-griffith](https://github.com/j-griffith)) [SIG Storage] + - TaintBasedEvictions ([#87487](https://github.com/kubernetes/kubernetes/pull/87487), [@skilxn-go](https://github.com/skilxn-go)) [SIG API Machinery, Apps, Node, Scheduling and Testing] + - BlockVolume and CSIBlockVolume ([#88673](https://github.com/kubernetes/kubernetes/pull/88673), [@jsafrane](https://github.com/jsafrane)) [SIG Storage] + - Windows RunAsUserName ([#87790](https://github.com/kubernetes/kubernetes/pull/87790), [@marosset](https://github.com/marosset)) [SIG Apps and Windows] +- The following feature gates are removed, because the associated features were unconditionally enabled in previous releases: CustomResourceValidation, CustomResourceSubresources, CustomResourceWebhookConversion, CustomResourcePublishOpenAPI, CustomResourceDefaulting ([#87475](https://github.com/kubernetes/kubernetes/pull/87475), [@liggitt](https://github.com/liggitt)) [SIG API Machinery] + +### Feature + +- API request throttling (due to a high rate of requests) is now reported in client-go logs at log level 2. The messages are of the form:`Throttling request took 1.50705208s, request: GET:` The presence of these messages may indicate to the administrator the need to tune the cluster accordingly. 
+- Add support for mount options to the FC volume plugin ([#87499](https://github.com/kubernetes/kubernetes/pull/87499), [@ejweber](https://github.com/ejweber)) [SIG Storage]
+- Added a config-mode flag in the azure auth module to enable getting the AAD token without the spn: prefix in the audience claim. When it is not specified, the default behavior doesn't change. ([#87630](https://github.com/kubernetes/kubernetes/pull/87630), [@weinong](https://github.com/weinong)) [SIG API Machinery, Auth, CLI and Cloud Provider]
+- Allow for configuration of the CoreDNS replica count ([#85837](https://github.com/kubernetes/kubernetes/pull/85837), [@pickledrick](https://github.com/pickledrick)) [SIG Cluster Lifecycle]
+- Allow the user to specify a resource using the --filename flag when invoking kubectl exec ([#88460](https://github.com/kubernetes/kubernetes/pull/88460), [@soltysh](https://github.com/soltysh)) [SIG CLI and Testing]
+- Apiserver added a new flag --goaway-chance which is the fraction of requests that will be closed gracefully (GOAWAY) to prevent HTTP/2 clients from getting stuck on a single apiserver. ([#88567](https://github.com/kubernetes/kubernetes/pull/88567), [@answer1991](https://github.com/answer1991)) [SIG API Machinery]
+- Azure Cloud Provider now supports using Azure network resources (Virtual Network, Load Balancer, Public IP, Route Table, Network Security Group, etc.) in a different AAD Tenant and Subscription than those for the Kubernetes cluster. To use the feature, please reference https://github.com/kubernetes-sigs/cloud-provider-azure/blob/master/docs/cloud-provider-config.md#host-network-resources-in-different-aad-tenant-and-subscription. ([#88384](https://github.com/kubernetes/kubernetes/pull/88384), [@bowen5](https://github.com/bowen5)) [SIG Cloud Provider]
+- Azure VMSS/VMSSVM clients now suppress requests on throttling ([#86740](https://github.com/kubernetes/kubernetes/pull/86740), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Azure cloud provider cache TTLs are now configurable via the following options:
+  - "availabilitySetNodesCacheTTLInSeconds"
+  - "vmssCacheTTLInSeconds"
+  - "vmssVirtualMachinesCacheTTLInSeconds"
+  - "vmCacheTTLInSeconds"
+  - "loadBalancerCacheTTLInSeconds"
+  - "nsgCacheTTLInSeconds"
+  - "routeTableCacheTTLInSeconds"
+  ([#86266](https://github.com/kubernetes/kubernetes/pull/86266), [@zqingqing1](https://github.com/zqingqing1)) [SIG Cloud Provider]
+- Azure global rate limiting is switched to per-client. A set of new rate-limit configuration options are introduced, including routeRateLimit, SubnetsRateLimit, InterfaceRateLimit, RouteTableRateLimit, LoadBalancerRateLimit, PublicIPAddressRateLimit, SecurityGroupRateLimit, VirtualMachineRateLimit, StorageAccountRateLimit, DiskRateLimit, SnapshotRateLimit, VirtualMachineScaleSetRateLimit and VirtualMachineSizeRateLimit. The original rate-limit options serve as default values for the new clients' rate limiters. ([#86515](https://github.com/kubernetes/kubernetes/pull/86515), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Azure network and VM clients now suppress requests on throttling ([#87122](https://github.com/kubernetes/kubernetes/pull/87122), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Azure storage clients now suppress requests on throttling ([#87306](https://github.com/kubernetes/kubernetes/pull/87306), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Azure: add support for single stack IPv6 ([#88448](https://github.com/kubernetes/kubernetes/pull/88448), [@aramase](https://github.com/aramase)) [SIG Cloud Provider]
+- DefaultConstraints can be specified for the PodTopologySpread Plugin in the scheduler's ComponentConfig ([#88671](https://github.com/kubernetes/kubernetes/pull/88671), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- DisableAvailabilitySetNodes is added to avoid the VM list for VMSS clusters. It should only be used when vmType is "vmss" and all the nodes (including control plane nodes) are VMSS virtual machines. ([#87685](https://github.com/kubernetes/kubernetes/pull/87685), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Elasticsearch supports automatically setting the advertise address ([#85944](https://github.com/kubernetes/kubernetes/pull/85944), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle and Instrumentation]
+- EndpointSlices will now be enabled by default. A new `EndpointSliceProxying` feature gate determines if kube-proxy will use EndpointSlices; this is disabled by default. ([#86137](https://github.com/kubernetes/kubernetes/pull/86137), [@robscott](https://github.com/robscott)) [SIG Network]
+- Kube-proxy: Added dual-stack IPv4/IPv6 support to the iptables proxier. ([#82462](https://github.com/kubernetes/kubernetes/pull/82462), [@vllry](https://github.com/vllry)) [SIG Network]
+- Kubeadm now supports automatic calculation of dual-stack node CIDR masks for kube-controller-manager. ([#85609](https://github.com/kubernetes/kubernetes/pull/85609), [@Arvinderpal](https://github.com/Arvinderpal)) [SIG Cluster Lifecycle]
+- Kubeadm: add an upgrade health check that deploys a Job ([#81319](https://github.com/kubernetes/kubernetes/pull/81319), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: add the experimental feature gate PublicKeysECDSA that can be used to create a cluster with ECDSA certificates from "kubeadm init" (see the sketches after this list). Renewal of existing ECDSA certificates is also supported using "kubeadm alpha certs renew", but not switching between the RSA and ECDSA algorithms on the fly or during upgrades. ([#86953](https://github.com/kubernetes/kubernetes/pull/86953), [@rojkov](https://github.com/rojkov)) [SIG API Machinery, Auth and Cluster Lifecycle]
+- Kubeadm: implemented structured output of the 'kubeadm config images list' command in JSON, YAML, Go template and JSONPath formats ([#86810](https://github.com/kubernetes/kubernetes/pull/86810), [@bart0sh](https://github.com/bart0sh)) [SIG Cluster Lifecycle]
+- Kubeadm: on kubeconfig certificate renewal, keep the embedded CA in sync with the one on disk ([#88052](https://github.com/kubernetes/kubernetes/pull/88052), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: reject a node joining the cluster if a node with the same name already exists ([#81056](https://github.com/kubernetes/kubernetes/pull/81056), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: support Windows specific kubelet flags in kubeadm-flags.env ([#88287](https://github.com/kubernetes/kubernetes/pull/88287), [@gab-satchi](https://github.com/gab-satchi)) [SIG Cluster Lifecycle and Windows]
+- Kubeadm: support automatic retry after failing to pull an image ([#86899](https://github.com/kubernetes/kubernetes/pull/86899), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Kubeadm: upgrade supports fallback to the nearest known etcd version if an unknown k8s version is passed ([#88373](https://github.com/kubernetes/kubernetes/pull/88373), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Kubectl/drain: add disable-eviction option, forcing drain to use delete even if eviction is supported. This will bypass checking PodDisruptionBudgets, and should be used with caution (see the sketches after this list). ([#85571](https://github.com/kubernetes/kubernetes/pull/85571), [@michaelgugino](https://github.com/michaelgugino)) [SIG CLI]
+- Kubectl/drain: add skip-wait-for-delete-timeout option. If a pod's `DeletionTimestamp` is older than N seconds, skip waiting for the pod. Seconds must be greater than 0 to skip. ([#85577](https://github.com/kubernetes/kubernetes/pull/85577), [@michaelgugino](https://github.com/michaelgugino)) [SIG CLI]
+- Option `preConfiguredBackendPoolLoadBalancerTypes` is added to the azure cloud provider for the pre-configured load balancers; possible values: `""`, `"internal"`, `"external"`, `"all"` ([#86338](https://github.com/kubernetes/kubernetes/pull/86338), [@gossion](https://github.com/gossion)) [SIG Cloud Provider]
+- PodTopologySpread plugin now excludes terminatingPods when making scheduling decisions. ([#87845](https://github.com/kubernetes/kubernetes/pull/87845), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling]
+- Provider/azure: Network security groups can now be in a separate resource group. ([#87035](https://github.com/kubernetes/kubernetes/pull/87035), [@CecileRobertMichon](https://github.com/CecileRobertMichon)) [SIG Cloud Provider]
+- SafeSysctlWhitelist: add net.ipv4.ping_group_range ([#85463](https://github.com/kubernetes/kubernetes/pull/85463), [@AkihiroSuda](https://github.com/AkihiroSuda)) [SIG Auth]
+- Scheduler framework permit plugins now run at the end of the scheduling cycle, after reserve plugins. Waiting on permit will remain at the beginning of the binding cycle. ([#88199](https://github.com/kubernetes/kubernetes/pull/88199), [@mateuszlitwin](https://github.com/mateuszlitwin)) [SIG Scheduling]
+- Scheduler: Add DefaultBinder plugin ([#87430](https://github.com/kubernetes/kubernetes/pull/87430), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling and Testing]
+- Skip the default spreading scoring plugin for pods that define TopologySpreadConstraints ([#87566](https://github.com/kubernetes/kubernetes/pull/87566), [@skilxn-go](https://github.com/skilxn-go)) [SIG Scheduling]
+- The kubectl --dry-run flag now accepts the values 'client', 'server', and 'none', to support client-side and server-side dry-run strategies (see the sketches after this list). The boolean and unset values for the --dry-run flag are deprecated and a value will be required in a future version. ([#87580](https://github.com/kubernetes/kubernetes/pull/87580), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI]
+- Support server-side dry-run in kubectl with --dry-run=server for commands including apply, patch, create, run, annotate, label, set, autoscale, drain, rollout undo, and expose. ([#87714](https://github.com/kubernetes/kubernetes/pull/87714), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG API Machinery, CLI and Testing]
+- Add --dry-run=server|client to kubectl delete, taint, replace ([#88292](https://github.com/kubernetes/kubernetes/pull/88292), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI and Testing]
+- The feature PodTopologySpread (feature gate `EvenPodsSpread`) has been enabled by default in 1.18. ([#88105](https://github.com/kubernetes/kubernetes/pull/88105), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling and Testing]
+- The kubelet and the default docker runtime now support running ephemeral containers in the Linux process namespace of a target container. Other container runtimes must implement support for this feature before it will be available for that runtime. ([#84731](https://github.com/kubernetes/kubernetes/pull/84731), [@verb](https://github.com/verb)) [SIG Node]
+- The underlying format of the `CPUManager` state file has changed. Upgrades should be seamless, but any third-party tools that rely on reading the previous format need to be updated. ([#84462](https://github.com/kubernetes/kubernetes/pull/84462), [@klueska](https://github.com/klueska)) [SIG Node and Testing]
+- Update CNI version to v0.8.5 ([#78819](https://github.com/kubernetes/kubernetes/pull/78819), [@justaugustus](https://github.com/justaugustus)) [SIG API Machinery, Cluster Lifecycle, Network, Release and Testing]
+- Webhooks have alpha support for network proxy ([#85870](https://github.com/kubernetes/kubernetes/pull/85870), [@Jefftree](https://github.com/Jefftree)) [SIG API Machinery, Auth and Testing]
+- When client certificate files are provided, reload files for new connections, and close connections when a certificate changes. ([#79083](https://github.com/kubernetes/kubernetes/pull/79083), [@jackkleeman](https://github.com/jackkleeman)) [SIG API Machinery, Auth, Node and Testing]
+- When deleting objects using kubectl with the --force flag, you are no longer required to also specify --grace-period=0. ([#87776](https://github.com/kubernetes/kubernetes/pull/87776), [@brianpursley](https://github.com/brianpursley)) [SIG CLI]
+- Windows nodes on GCE can use virtual TPM-based authentication to the control plane. ([#85466](https://github.com/kubernetes/kubernetes/pull/85466), [@pjh](https://github.com/pjh)) [SIG Cluster Lifecycle]
+- You can now pass "--node-ip ::" to kubelet to indicate that it should autodetect an IPv6 address to use as the node's primary address. ([#85850](https://github.com/kubernetes/kubernetes/pull/85850), [@danwinship](https://github.com/danwinship)) [SIG Cloud Provider, Network and Node]
+- `kubectl` now contains a `kubectl alpha debug` command. This command allows attaching an ephemeral container to a running pod for the purposes of debugging. ([#88004](https://github.com/kubernetes/kubernetes/pull/88004), [@verb](https://github.com/verb)) [SIG CLI]
+- TLS Server Name overrides can now be specified in a kubeconfig file and via --tls-server-name in kubectl ([#88769](https://github.com/kubernetes/kubernetes/pull/88769), [@deads2k](https://github.com/deads2k)) [SIG API Machinery, Auth and CLI]
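+
+A few sketches of the CLI features above; node names, file paths, and cluster bootstrap details are illustrative:
+
+```bash
+# Server-side dry run: validate a change against the live API server without persisting it.
+kubectl apply -f ./app.yaml --dry-run=server
+
+# Client-side dry run: render the change locally.
+kubectl apply -f ./app.yaml --dry-run=client -o yaml
+
+# Force-delete without the previously required --grace-period=0:
+kubectl delete pod stuck-pod --force
+
+# Drain: force deletion without eviction (bypasses PodDisruptionBudget checks),
+# and skip waiting for pods that have been terminating for more than 60 seconds.
+kubectl drain node-1 --disable-eviction
+kubectl drain node-1 --skip-wait-for-delete-timeout=60
+
+# kubeadm: bootstrap a cluster with ECDSA certificates (experimental feature gate).
+kubeadm init --feature-gates=PublicKeysECDSA=true
+```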
+
+#### Metrics:
+- Add `rest_client_rate_limiter_duration_seconds` metric to component-base to track client-side rate limiter latency in seconds. Broken down by verb and URL. ([#88134](https://github.com/kubernetes/kubernetes/pull/88134), [@jennybuckley](https://github.com/jennybuckley)) [SIG API Machinery, Cluster Lifecycle and Instrumentation]
+- Added two client certificate metrics for exec auth:
+  - `rest_client_certificate_expiration_seconds`, a gauge reporting the lifetime of the current client certificate. Reports the time of expiry in seconds since January 1, 1970 UTC.
+  - `rest_client_certificate_rotation_age`, a histogram reporting the age of a just rotated client certificate in seconds. ([#84382](https://github.com/kubernetes/kubernetes/pull/84382), [@sambdavidson](https://github.com/sambdavidson)) [SIG API Machinery, Auth, Cluster Lifecycle and Instrumentation]
+- Controller manager now serves workqueue metrics ([#87967](https://github.com/kubernetes/kubernetes/pull/87967), [@zhan849](https://github.com/zhan849)) [SIG API Machinery]
-- The following metrics have been turned off:
-  - `scheduler_scheduling_latency_seconds`
-  - `scheduler_e2e_scheduling_latency_microseconds`
-  - `scheduler_scheduling_algorithm_latency_microseconds`
-  - `scheduler_scheduling_algorithm_predicate_evaluation`
-  - `scheduler_scheduling_algorithm_priority_evaluation`
-  - `scheduler_scheduling_algorithm_preemption_evaluation`
-  - `scheduler_scheduling_binding_latency_microseconds`
-  ([#83838](https://github.com/kubernetes/kubernetes/pull/83838), [@RainbowMango](https://github.com/RainbowMango))
-- Deprecated metric `kubeproxy_sync_proxy_rules_latency_microseconds` has been turned off. ([#83839](https://github.com/kubernetes/kubernetes/pull/83839), [@RainbowMango](https://github.com/RainbowMango))
-
-## Notable Features
-
-### Stable
-
-- Graduate ScheduleDaemonSetPods to GA. (feature gate will be removed in 1.18) ([#82795](https://github.com/kubernetes/kubernetes/pull/82795), [@draveness](https://github.com/draveness))
-- Graduate TaintNodesByCondition to GA in 1.17. (feature gate will be removed in 1.18) ([#82703](https://github.com/kubernetes/kubernetes/pull/82703), [@draveness](https://github.com/draveness))
-- The WatchBookmark feature is promoted to GA. With the WatchBookmark feature, clients are able to request watch events with the BOOKMARK type. See https://kubernetes.io/docs/reference/using-api/api-concepts/#watch-bookmarks for more details. ([#83195](https://github.com/kubernetes/kubernetes/pull/83195), [@wojtek-t](https://github.com/wojtek-t))
-- Promote NodeLease feature to GA. The feature makes Lease object changes an additional health signal from the Node. Together with that, the frequency of NodeStatus updates is reduced to 5m by default when there are no changes to the status itself. ([#84351](https://github.com/kubernetes/kubernetes/pull/84351), [@wojtek-t](https://github.com/wojtek-t))
-- CSI Topology feature is GA. ([#83474](https://github.com/kubernetes/kubernetes/pull/83474), [@msau42](https://github.com/msau42))
-- The VolumeSubpathEnvExpansion feature is graduating to GA. The `VolumeSubpathEnvExpansion` feature gate is unconditionally enabled, and will be removed in v1.19. ([#82578](https://github.com/kubernetes/kubernetes/pull/82578), [@kevtaylor](https://github.com/kevtaylor))
-- Node-specific volume limits have graduated to GA. ([#83568](https://github.com/kubernetes/kubernetes/pull/83568), [@bertinatto](https://github.com/bertinatto))
-- The ResourceQuotaScopeSelectors feature has graduated to GA. The `ResourceQuotaScopeSelectors` feature gate is now unconditionally enabled and will be removed in 1.18. ([#82690](https://github.com/kubernetes/kubernetes/pull/82690), [@draveness](https://github.com/draveness))
-
-### Beta
-
-- The Kubernetes Volume Snapshot feature has been moved to beta. The VolumeSnapshotDataSource feature gate is on by default in this release. This feature enables you to take a snapshot of a volume (if supported by the CSI driver), and use the snapshot to provision a new volume, pre-populated with data from the snapshot.
-- Feature gates CSIMigration to Beta (on by default) and CSIMigrationGCE to Beta (off by default since it requires installation of the GCE PD CSI Driver) ([#85231](https://github.com/kubernetes/kubernetes/pull/85231), [@davidz627](https://github.com/davidz627))
-- EndpointSlices are now beta but not yet enabled by default. Use the EndpointSlice feature gate to enable this feature. ([#85365](https://github.com/kubernetes/kubernetes/pull/85365), [@robscott](https://github.com/robscott))
-- Promote CSIMigrationAWS to Beta (off by default since it requires installation of the AWS EBS CSI Driver) ([#85237](https://github.com/kubernetes/kubernetes/pull/85237), [@leakingtapan](https://github.com/leakingtapan))
-- Moving Windows RunAsUserName feature to beta ([#84882](https://github.com/kubernetes/kubernetes/pull/84882), [@marosset](https://github.com/marosset))
-
-### CLI Improvements
-
-- kubectl's api-resources command now has a `--sort-by` flag to sort resources by name or kind. ([#81971](https://github.com/kubernetes/kubernetes/pull/81971), [@laddng](https://github.com/laddng))
-- A new `--prefix` flag has been added to kubectl logs which prepends each log line with information about its source (pod name and container name) ([#76471](https://github.com/kubernetes/kubernetes/pull/76471), [@m1kola](https://github.com/m1kola))
-
-## API Changes
-
-- CustomResourceDefinitions now validate documented API semantics of `x-kubernetes-list-type` and `x-kubernetes-map-type` atomic to reject non-atomic sub-types. ([#84722](https://github.com/kubernetes/kubernetes/pull/84722), [@sttts](https://github.com/sttts))
-- Kube-apiserver: The `AdmissionConfiguration` type accepted by `--admission-control-config-file` has been promoted to `apiserver.config.k8s.io/v1` with no schema changes. ([#85098](https://github.com/kubernetes/kubernetes/pull/85098), [@liggitt](https://github.com/liggitt))
-- Fixed EndpointSlice port name validation to match Endpoint port name validation (allowing port names longer than 15 characters) ([#84481](https://github.com/kubernetes/kubernetes/pull/84481), [@robscott](https://github.com/robscott))
-- CustomResourceDefinitions introduce the `x-kubernetes-map-type` annotation as a CRD API extension. This enables this particular validation for server-side apply. ([#84113](https://github.com/kubernetes/kubernetes/pull/84113), [@enxebre](https://github.com/enxebre))
-
-## Other notable changes
-
-### API Machinery
-
-- kube-apiserver: the `--runtime-config` flag now supports an `api/beta=false` value which disables all built-in REST API versions matching `v[0-9]+beta[0-9]+`. ([#84304](https://github.com/kubernetes/kubernetes/pull/84304), [@liggitt](https://github.com/liggitt))
-  The `--feature-gates` flag now supports an `AllBeta=false` value which disables all beta feature gates. ([#84304](https://github.com/kubernetes/kubernetes/pull/84304), [@liggitt](https://github.com/liggitt))
-- New flag `--show-hidden-metrics-for-version` in kube-apiserver can be used to show all hidden metrics that were deprecated in the previous minor release. ([#84292](https://github.com/kubernetes/kubernetes/pull/84292), [@RainbowMango](https://github.com/RainbowMango))
-- kube-apiserver: Authentication configuration for mutating and validating admission webhooks referenced from an `--admission-control-config-file` can now be specified with `apiVersion: apiserver.config.k8s.io/v1, kind: WebhookAdmissionConfiguration`. ([#85138](https://github.com/kubernetes/kubernetes/pull/85138), [@liggitt](https://github.com/liggitt))
-- kube-apiserver: The `ResourceQuota` admission plugin configuration referenced from `--admission-control-config-file` admission config has been promoted to `apiVersion: apiserver.config.k8s.io/v1`, `kind: ResourceQuotaConfiguration` with no schema changes. ([#85099](https://github.com/kubernetes/kubernetes/pull/85099), [@liggitt](https://github.com/liggitt))
-- kube-apiserver: fixed a bug that could cause a goroutine leak if the apiserver encountered an encoding error serving a watch to a websocket watcher ([#84693](https://github.com/kubernetes/kubernetes/pull/84693), [@tedyu](https://github.com/tedyu))
-- Fixed the bug that the EndpointSlice for masters wasn't created after enabling the EndpointSlice feature on a pre-existing cluster. ([#84421](https://github.com/kubernetes/kubernetes/pull/84421), [@tnqn](https://github.com/tnqn))
-- Switched intstr.Type to a sized integer to follow API guidelines and improve compatibility with proto libraries ([#83956](https://github.com/kubernetes/kubernetes/pull/83956), [@liggitt](https://github.com/liggitt))
-- Client-go: improved allocation behavior of the delaying workqueue when handling objects with far-future ready times. ([#83945](https://github.com/kubernetes/kubernetes/pull/83945), [@barkbay](https://github.com/barkbay))
-- Fixed an issue with informers missing an `Added` event if a recently deleted object was immediately recreated at the same time the informer dropped a watch and relisted. ([#83911](https://github.com/kubernetes/kubernetes/pull/83911), [@matte21](https://github.com/matte21))
-- Fixed panic when accessing CustomResources of a CRD with `x-kubernetes-int-or-string`. ([#83787](https://github.com/kubernetes/kubernetes/pull/83787), [@sttts](https://github.com/sttts))
-- The resource version option, when passed to a list call, is now consistently interpreted as the minimum allowed resource version. Previously, when listing resources that had the watch cache disabled, clients could retrieve a snapshot at that exact resource version. If the client requests a resource version newer than the current state, a TimeoutError is returned suggesting the client retry in a few seconds. This behavior is now consistent for both single item retrieval and list calls, and for when the watch cache is enabled or disabled. ([#72170](https://github.com/kubernetes/kubernetes/pull/72170), [@jpbetz](https://github.com/jpbetz))
-- Fixes a goroutine leak in kube-apiserver when a request times out. ([#83333](https://github.com/kubernetes/kubernetes/pull/83333), [@lavalamp](https://github.com/lavalamp))
-- Fixed a bug in informer-gen that made it produce incorrect code if a type had the nonNamespaced tag set. ([#80458](https://github.com/kubernetes/kubernetes/pull/80458), [@tatsuhiro-t](https://github.com/tatsuhiro-t))
-- Resolves a bottleneck in internal API server communication that can cause increased goroutines and degrade API Server performance ([#80465](https://github.com/kubernetes/kubernetes/pull/80465), [@answer1991](https://github.com/answer1991))
-- Resolves a regression generating informers for packages whose names contain `.` characters ([#82410](https://github.com/kubernetes/kubernetes/pull/82410), [@nikhita](https://github.com/nikhita))
-- Resolves an issue with `/readyz` and `/livez` not including etcd and kms health checks ([#82713](https://github.com/kubernetes/kubernetes/pull/82713), [@logicalhan](https://github.com/logicalhan))
-- Fixes a regression in logging spurious stack traces when proxied connections are closed by the backend ([#82588](https://github.com/kubernetes/kubernetes/pull/82588), [@liggitt](https://github.com/liggitt))
-- Kube-apiserver now reloads serving certificates from disk every minute to allow rotation without restarting the server process ([#84200](https://github.com/kubernetes/kubernetes/pull/84200), [@jackkleeman](https://github.com/jackkleeman))
-- Client-ca bundles for all generic-apiserver based servers will dynamically reload from disk on content changes ([#83579](https://github.com/kubernetes/kubernetes/pull/83579), [@deads2k](https://github.com/deads2k))
-- Client-go: Clients can request protobuf and json and correctly negotiate with the server for JSON for CRD objects, allowing all client libraries to request protobuf if it is available. If an error occurs negotiating a watch with the server, the error is immediately returned by the client `Watch()` method instead of being sent as an `Error` event on the watch stream. ([#84692](https://github.com/kubernetes/kubernetes/pull/84692), [@smarterclayton](https://github.com/smarterclayton))
-- Renamed FeatureGate RequestManagement to APIPriorityAndFairness. This feature gate is an alpha and has not yet been associated with any actual functionality. ([#85260](https://github.com/kubernetes/kubernetes/pull/85260), [@MikeSpreitzer](https://github.com/MikeSpreitzer))
-- Filter published OpenAPI schema by making nullable, required fields non-required in order to prevent kubectl from wrongly rejecting null values. ([#85722](https://github.com/kubernetes/kubernetes/pull/85722), [@sttts](https://github.com/sttts))
-- kube-apiserver: fixed a conflict error encountered when attempting to delete a pod with `gracePeriodSeconds=0` and a resourceVersion precondition ([#85516](https://github.com/kubernetes/kubernetes/pull/85516), [@michaelgugino](https://github.com/michaelgugino))
-- Use context to check whether the client has closed, instead of http.CloseNotifier, when processing watch requests; this saves one goroutine per request when the protocol is HTTP/2.x. ([#85408](https://github.com/kubernetes/kubernetes/pull/85408), [@answer1991](https://github.com/answer1991))
-- Reload apiserver SNI certificates from disk every minute ([#84303](https://github.com/kubernetes/kubernetes/pull/84303), [@jackkleeman](https://github.com/jackkleeman))
-- The mutating and validating admission webhook plugins now read configuration from the admissionregistration.k8s.io/v1 API. ([#80883](https://github.com/kubernetes/kubernetes/pull/80883), [@liggitt](https://github.com/liggitt))
-- kube-proxy: a configuration file specified via `--config` is now loaded with strict deserialization, which fails if the config file contains duplicate or unknown fields. This protects against accidentally running with config files that are malformed, mis-indented, or have typos in field names, and getting unexpected behavior. ([#82927](https://github.com/kubernetes/kubernetes/pull/82927), [@obitech](https://github.com/obitech))
-- When registering with a 1.17+ API server, MutatingWebhookConfiguration and ValidatingWebhookConfiguration objects can now request that only `v1` AdmissionReview requests be sent to them. Previously, webhooks were required to support receiving `v1beta1` AdmissionReview requests as well for compatibility with API servers <= 1.15.
-  When registering with a 1.17+ API server, a CustomResourceDefinition conversion webhook can now request that only `v1` ConversionReview requests be sent to it. Previously, conversion webhooks were required to support receiving `v1beta1` ConversionReview requests as well for compatibility with API servers <= 1.15. ([#82707](https://github.com/kubernetes/kubernetes/pull/82707), [@liggitt](https://github.com/liggitt))
-- The OpenAPI v3 format in CustomResourceDefinition schemas is now documented. ([#85381](https://github.com/kubernetes/kubernetes/pull/85381), [@sttts](https://github.com/sttts))
-- kube-apiserver: Fixed a regression accepting patch requests > 1MB ([#84963](https://github.com/kubernetes/kubernetes/pull/84963), [@liggitt](https://github.com/liggitt))
-- The example API server has renamed its `wardle.k8s.io` API group to `wardle.example.com` ([#81670](https://github.com/kubernetes/kubernetes/pull/81670), [@liggitt](https://github.com/liggitt))
-- CRD defaulting is promoted to GA. Note: the feature gate CustomResourceDefaulting will be removed in 1.18. ([#84713](https://github.com/kubernetes/kubernetes/pull/84713), [@sttts](https://github.com/sttts))
-- Restores compatibility with <=1.15.x custom resources by not publishing OpenAPI for non-structural custom resource definitions ([#82653](https://github.com/kubernetes/kubernetes/pull/82653), [@liggitt](https://github.com/liggitt))
-- If given an IPv6 bind-address, kube-apiserver will now advertise an IPv6 endpoint for the kubernetes.default service. ([#84727](https://github.com/kubernetes/kubernetes/pull/84727), [@danwinship](https://github.com/danwinship))
-- Add a table convertor to component status. ([#85174](https://github.com/kubernetes/kubernetes/pull/85174), [@zhouya0](https://github.com/zhouya0))
-- Scale custom resources unconditionally if resourceVersion is not provided ([#80572](https://github.com/kubernetes/kubernetes/pull/80572), [@knight42](https://github.com/knight42))
-- When the go-client reflector relists, the ResourceVersion list option is set to the reflector's latest synced resource version to ensure the reflector does not "go back in time" and reprocess events older than it has already processed. If the server responds with an HTTP 410 (Gone) status code response, the relist falls back to using `resourceVersion=""`. ([#83520](https://github.com/kubernetes/kubernetes/pull/83520), [@jpbetz](https://github.com/jpbetz))
- The OpenAPI v3 format in CustomResourceDefinition schemas is now documented. ([#85381](https://github.com/kubernetes/kubernetes/pull/85381), [@sttts](https://github.com/sttts))
- kube-apiserver: Fixed a regression accepting patch requests > 1MB ([#84963](https://github.com/kubernetes/kubernetes/pull/84963), [@liggitt](https://github.com/liggitt))
- The example API server has renamed its `wardle.k8s.io` API group to `wardle.example.com` ([#81670](https://github.com/kubernetes/kubernetes/pull/81670), [@liggitt](https://github.com/liggitt))
- CRDs defaulting is promoted to GA. Note: the feature gate CustomResourceDefaulting will be removed in 1.18. ([#84713](https://github.com/kubernetes/kubernetes/pull/84713), [@sttts](https://github.com/sttts))
- Restores compatibility with <=1.15.x custom resources by not publishing OpenAPI for non-structural custom resource definitions ([#82653](https://github.com/kubernetes/kubernetes/pull/82653), [@liggitt](https://github.com/liggitt))
- If given an IPv6 bind-address, kube-apiserver will now advertise an IPv6 endpoint for the kubernetes.default service. ([#84727](https://github.com/kubernetes/kubernetes/pull/84727), [@danwinship](https://github.com/danwinship))
- Add table convertor to component status. ([#85174](https://github.com/kubernetes/kubernetes/pull/85174), [@zhouya0](https://github.com/zhouya0))
- Scale custom resource unconditionally if resourceVersion is not provided ([#80572](https://github.com/kubernetes/kubernetes/pull/80572), [@knight42](https://github.com/knight42))
- When the go-client reflector relists, the ResourceVersion list option is set to the reflector's latest synced resource version to ensure the reflector does not "go back in time" and reprocess events older than it has already processed. If the server responds with an HTTP 410 (Gone) status code response, the relist falls back to using `resourceVersion=""`. ([#83520](https://github.com/kubernetes/kubernetes/pull/83520), [@jpbetz](https://github.com/jpbetz))
- Fix unsafe JSON construction in a number of locations in the codebase ([#81158](https://github.com/kubernetes/kubernetes/pull/81158), [@zouyee](https://github.com/zouyee))
- Fixes a flaw (CVE-2019-11253) in json/yaml decoding where large or malformed documents could consume excessive server resources. Request bodies for normal API requests (create/delete/update/patch operations of regular resources) are now limited to 3MB. ([#83261](https://github.com/kubernetes/kubernetes/pull/83261), [@liggitt](https://github.com/liggitt))
- CRDs can now have fields named `type` with value `array`, and nested arrays with `items` fields, without validation failing over this. ([#85223](https://github.com/kubernetes/kubernetes/pull/85223), [@sttts](https://github.com/sttts))

### Apps

- Support Service Topology ([#72046](https://github.com/kubernetes/kubernetes/pull/72046), [@m1093782566](https://github.com/m1093782566))
- Finalizer Protection for Service LoadBalancers is now in GA (enabled by default). This feature ensures the Service resource is not fully deleted until the correlating load balancer resources are deleted. ([#85023](https://github.com/kubernetes/kubernetes/pull/85023), [@MrHohn](https://github.com/MrHohn))
- Pod process namespace sharing is now Generally Available. The `PodShareProcessNamespace` feature gate is now deprecated and will be removed in Kubernetes 1.19. ([#84356](https://github.com/kubernetes/kubernetes/pull/84356), [@verb](https://github.com/verb))
- Fix handling tombstones in the pod-disruption-budget controller. ([#83951](https://github.com/kubernetes/kubernetes/pull/83951), [@zouyee](https://github.com/zouyee))
- Fixed the bug that deleted services were processed by EndpointSliceController repeatedly even if their cleanup was successful. ([#82996](https://github.com/kubernetes/kubernetes/pull/82996), [@tnqn](https://github.com/tnqn))
- Add `RequiresExactMatch` for `label.Selector` ([#85048](https://github.com/kubernetes/kubernetes/pull/85048), [@shaloulcy](https://github.com/shaloulcy))
- Adds a new label to indicate what is managing an EndpointSlice. ([#83965](https://github.com/kubernetes/kubernetes/pull/83965), [@robscott](https://github.com/robscott))
- An end-user may choose to request logs without confirming the identity of the backing kubelet. This feature can be disabled by setting the `AllowInsecureBackendProxy` feature-gate to false. ([#83419](https://github.com/kubernetes/kubernetes/pull/83419), [@deads2k](https://github.com/deads2k))
- When scaling down a ReplicaSet, delete doubled up replicas first, where a "doubled up replica" is defined as one that is on the same node as an active replica belonging to a related ReplicaSet. ReplicaSets are considered "related" if they have a common controller (typically a Deployment). ([#80004](https://github.com/kubernetes/kubernetes/pull/80004), [@Miciah](https://github.com/Miciah))
- Kube-controller-manager: Fixes bug setting headless service labels on endpoints ([#85361](https://github.com/kubernetes/kubernetes/pull/85361), [@liggitt](https://github.com/liggitt))
- People can see the right log and note. ([#84637](https://github.com/kubernetes/kubernetes/pull/84637), [@zhipengzuo](https://github.com/zhipengzuo))
- Clean up a duplicate GetPodServiceMemberships function ([#83902](https://github.com/kubernetes/kubernetes/pull/83902), [@gongguan](https://github.com/gongguan))

### Auth

- K8s docker config json secrets are now compatible with docker config desktop authentication credentials files ([#82148](https://github.com/kubernetes/kubernetes/pull/82148), [@bbourbie](https://github.com/bbourbie))
- Kubelet and aggregated API servers now use v1 TokenReview and SubjectAccessReview endpoints to check authentication/authorization. ([#84768](https://github.com/kubernetes/kubernetes/pull/84768), [@liggitt](https://github.com/liggitt))
- Kube-apiserver can now specify `--authentication-token-webhook-version=v1` or `--authorization-webhook-version=v1` to use `v1` TokenReview and SubjectAccessReview API objects when communicating with authentication and authorization webhooks. ([#84768](https://github.com/kubernetes/kubernetes/pull/84768), [@liggitt](https://github.com/liggitt))
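As a sketch of how the new flags might be wired into a control plane, assuming a static Pod manifest and hypothetical webhook config file paths and image tag:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
    - name: kube-apiserver
      image: k8s.gcr.io/kube-apiserver:v1.17.0    # hypothetical tag
      command:
        - kube-apiserver
        - --authentication-token-webhook-config-file=/etc/kubernetes/authn-webhook.yaml
        - --authentication-token-webhook-version=v1   # send v1 TokenReview
        - --authorization-mode=Webhook
        - --authorization-webhook-config-file=/etc/kubernetes/authz-webhook.yaml
        - --authorization-webhook-version=v1          # send v1 SubjectAccessReview
```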
- Authentication token cache size is increased (from 4k to 32k) to support clusters with many nodes or many namespaces with active service accounts. ([#83643](https://github.com/kubernetes/kubernetes/pull/83643), [@lavalamp](https://github.com/lavalamp))
- Apiservers based on k8s.io/apiserver with delegated authn based on cluster authentication will automatically update to new authentication information when the authoritative configmap is updated. ([#85004](https://github.com/kubernetes/kubernetes/pull/85004), [@deads2k](https://github.com/deads2k))
- Configmaps/extension-apiserver-authentication in kube-system is continuously updated by kube-apiservers, instead of just at apiserver start ([#82705](https://github.com/kubernetes/kubernetes/pull/82705), [@deads2k](https://github.com/deads2k))

### CLI

- Fixed kubectl endpointslice output for get requests ([#82603](https://github.com/kubernetes/kubernetes/pull/82603), [@robscott](https://github.com/robscott))
- Gives the right error message when using `kubectl delete` with a wrong resource. ([#83825](https://github.com/kubernetes/kubernetes/pull/83825), [@zhouya0](https://github.com/zhouya0))
- If a bad flag is supplied to a kubectl command, only a tip to run `--help` is printed, instead of the usage menu. The usage menu is printed upon running `kubectl command --help`. ([#82423](https://github.com/kubernetes/kubernetes/pull/82423), [@sallyom](https://github.com/sallyom))
- Commands like `kubectl apply` now return errors if schema-invalid annotations are specified, rather than silently dropping the entire annotations section. ([#83552](https://github.com/kubernetes/kubernetes/pull/83552), [@liggitt](https://github.com/liggitt))
- Fixes spurious 0 revisions listed when running `kubectl rollout history` for a StatefulSet ([#82643](https://github.com/kubernetes/kubernetes/pull/82643), [@ZP-AlwaysWin](https://github.com/ZP-AlwaysWin))
- Correct a reference to a no-longer-used kustomize subcommand in the documentation ([#82535](https://github.com/kubernetes/kubernetes/pull/82535), [@demobox](https://github.com/demobox))
- `kubectl set resources` and `kubectl set subject` will no longer return an error if passed an empty change for a resource. ([#85490](https://github.com/kubernetes/kubernetes/pull/85490), [@sallyom](https://github.com/sallyom))
- Kubectl: `--resource-version` now works properly in label/annotate/set selector commands when racing with other clients to update the target object ([#85285](https://github.com/kubernetes/kubernetes/pull/85285), [@liggitt](https://github.com/liggitt))
- The `--certificate-authority` flag now correctly overrides existing skip-TLS or CA data settings in the kubeconfig file ([#83547](https://github.com/kubernetes/kubernetes/pull/83547), [@liggitt](https://github.com/liggitt))

### Cloud Provider

- Azure: update disk lock logic per vm during attach/detach to allow concurrent updates for different nodes. ([#85115](https://github.com/kubernetes/kubernetes/pull/85115), [@aramase](https://github.com/aramase))
- Fix vmss dirty cache issue in disk attach/detach on vmss node ([#85158](https://github.com/kubernetes/kubernetes/pull/85158), [@andyzhangx](https://github.com/andyzhangx))
- Fix race condition when attaching/deleting azure disks at the same time ([#84917](https://github.com/kubernetes/kubernetes/pull/84917), [@andyzhangx](https://github.com/andyzhangx))
- Change GCP ILB firewall names to contain the `k8s-fw-` prefix like the rest of the firewall rules. This is needed for consistency and also for other components to identify the firewall rule as k8s/service-controller managed. ([#84622](https://github.com/kubernetes/kubernetes/pull/84622), [@prameshj](https://github.com/prameshj))
- Ensure health probes are created for local traffic policy UDP services on Azure ([#84802](https://github.com/kubernetes/kubernetes/pull/84802), [@feiskyer](https://github.com/feiskyer))
- Openstack: Do not delete managed LB in case of security group reconciliation errors ([#82264](https://github.com/kubernetes/kubernetes/pull/82264), [@multi-io](https://github.com/multi-io))
- Fix aggressive VM calls for Azure VMSS ([#83102](https://github.com/kubernetes/kubernetes/pull/83102), [@feiskyer](https://github.com/feiskyer))
- Fix: azure disk detach failure if node does not exist ([#82640](https://github.com/kubernetes/kubernetes/pull/82640), [@andyzhangx](https://github.com/andyzhangx))
- Add azure disk encryption (SSE+CMK) support ([#84605](https://github.com/kubernetes/kubernetes/pull/84605), [@andyzhangx](https://github.com/andyzhangx))
- Update Azure SDK versions to v35.0.0 ([#84543](https://github.com/kubernetes/kubernetes/pull/84543), [@andyzhangx](https://github.com/andyzhangx))
- Azure: Add allow unsafe read from cache ([#83685](https://github.com/kubernetes/kubernetes/pull/83685), [@aramase](https://github.com/aramase))
- Reduces the number of calls made to the Azure API when requesting the instance view of a virtual machine scale set node. ([#82496](https://github.com/kubernetes/kubernetes/pull/82496), [@hasheddan](https://github.com/hasheddan))
- Added cloud operation count metrics to azure cloud controller manager. ([#82574](https://github.com/kubernetes/kubernetes/pull/82574), [@kkmsft](https://github.com/kkmsft))
- On AWS nodes with multiple network interfaces, kubelet should now more reliably report the same primary node IP. ([#80747](https://github.com/kubernetes/kubernetes/pull/80747), [@danwinship](https://github.com/danwinship))
- Update Azure load balancer to prevent orphaned public IP addresses ([#82890](https://github.com/kubernetes/kubernetes/pull/82890), [@chewong](https://github.com/chewong))

### Cluster Lifecycle

- Kubeadm alpha certs commands now skip missing files ([#85092](https://github.com/kubernetes/kubernetes/pull/85092), [@fabriziopandini](https://github.com/fabriziopandini))
- Kubeadm: the command "kubeadm token create" now has a "--certificate-key" flag that can be used for the formation of join commands for control-planes with automatic copy of certificates ([#84591](https://github.com/kubernetes/kubernetes/pull/84591), [@TheLastProject](https://github.com/TheLastProject))
- Kubeadm: Fix a bug where kubeadm cannot parse kubelet's version if the latter dumps logs on the standard error. ([#85351](https://github.com/kubernetes/kubernetes/pull/85351), [@rosti](https://github.com/rosti))
- Kubeadm: added retry to all the calls to the etcd API so kubeadm will be more resilient to network glitches ([#85201](https://github.com/kubernetes/kubernetes/pull/85201), [@fabriziopandini](https://github.com/fabriziopandini))
- Fixes a bug in kubeadm that caused init and join to hang indefinitely in specific conditions. ([#85156](https://github.com/kubernetes/kubernetes/pull/85156), [@chuckha](https://github.com/chuckha))
- Kubeadm now includes CoreDNS version 1.6.5
  - the `kubernetes` plugin adds metrics to measure kubernetes control plane latency.
  - the `health` plugin now includes the `lameduck` option by default, which waits for a duration before shutting down. ([#85109](https://github.com/kubernetes/kubernetes/pull/85109), [@rajansandeep](https://github.com/rajansandeep))
- Fixed bug when using kubeadm alpha certs commands with clusters using external etcd ([#85091](https://github.com/kubernetes/kubernetes/pull/85091), [@fabriziopandini](https://github.com/fabriziopandini))
- Kubeadm no longer defaults or validates the component configs of the kubelet or kube-proxy ([#79223](https://github.com/kubernetes/kubernetes/pull/79223), [@rosti](https://github.com/rosti))
- Kubeadm: remove the deprecated `--cri-socket` flag for `kubeadm upgrade apply`. The flag has been deprecated since v1.14. ([#85044](https://github.com/kubernetes/kubernetes/pull/85044), [@neolit123](https://github.com/neolit123))
- Kubeadm: prevent potential hanging of commands such as "kubeadm reset" if the apiserver endpoint is not reachable. ([#84648](https://github.com/kubernetes/kubernetes/pull/84648), [@neolit123](https://github.com/neolit123))
- Kubeadm: fix skipped etcd upgrade on secondary control-plane nodes when the command `kubeadm upgrade node` is used. ([#85024](https://github.com/kubernetes/kubernetes/pull/85024), [@neolit123](https://github.com/neolit123))
- Kubeadm: fix an issue with the kube-proxy container env. variables ([#84888](https://github.com/kubernetes/kubernetes/pull/84888), [@neolit123](https://github.com/neolit123))
- Utilize diagnostics tool to dump GKE windows test logs ([#83517](https://github.com/kubernetes/kubernetes/pull/83517), [@YangLu1031](https://github.com/YangLu1031))
- Kubeadm: always mount the kube-controller-manager hostPath volume that is given by the `--flex-volume-plugin-dir` flag. ([#84468](https://github.com/kubernetes/kubernetes/pull/84468), [@neolit123](https://github.com/neolit123))
- Update Cluster Autoscaler version to 1.16.2 (CA release docs: https://github.com/kubernetes/autoscaler/releases/tag/cluster-autoscaler-1.16.2) ([#84038](https://github.com/kubernetes/kubernetes/pull/84038), [@losipiuk](https://github.com/losipiuk))
- Kubeadm no longer removes /etc/cni/net.d as it does not install it. Users should remove files from it manually or rely on the component that created them ([#83950](https://github.com/kubernetes/kubernetes/pull/83950), [@yastij](https://github.com/yastij))
- Kubeadm: fix wrong default value for the `upgrade node --certificate-renewal` flag. ([#83528](https://github.com/kubernetes/kubernetes/pull/83528), [@neolit123](https://github.com/neolit123))
- Bump metrics-server to v0.3.5 ([#83015](https://github.com/kubernetes/kubernetes/pull/83015), [@olagacek](https://github.com/olagacek))
- Dashboard: disable the dashboard Deployment on non-Linux nodes. This step is required to support Windows worker nodes. ([#82975](https://github.com/kubernetes/kubernetes/pull/82975), [@wawa0210](https://github.com/wawa0210))
- Fixes a panic in kube-controller-manager cleaning up bootstrap tokens ([#82887](https://github.com/kubernetes/kubernetes/pull/82887), [@tedyu](https://github.com/tedyu))
- Kubeadm: add a new `kubelet-finalize` phase as part of the `init` workflow and an experimental sub-phase to enable automatic kubelet client certificate rotation on primary control-plane nodes.
  Prior to 1.17, and for existing nodes created by `kubeadm init` where kubelet client certificate rotation is desired, you must modify "/etc/kubernetes/kubelet.conf" to point to the PEM symlink for rotation: `client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem` and `client-key: /var/lib/kubelet/pki/kubelet-client-current.pem`, replacing the embedded client certificate and key. ([#84118](https://github.com/kubernetes/kubernetes/pull/84118), [@neolit123](https://github.com/neolit123))
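A minimal sketch of the resulting user entry in `/etc/kubernetes/kubelet.conf` (the user name varies per cluster; the cluster and context stanzas are omitted here):

```yaml
users:
  - name: default-auth                  # hypothetical user entry name
    user:
      # Point at the rotating PEM symlink instead of embedded cert data:
      client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
      client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
```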
- Kubeadm: add an upgrade health check that deploys a Job ([#81319](https://github.com/kubernetes/kubernetes/pull/81319), [@neolit123](https://github.com/neolit123))
- Kubeadm now supports automatic calculations of dual-stack node cidr masks to kube-controller-manager. ([#85609](https://github.com/kubernetes/kubernetes/pull/85609), [@Arvinderpal](https://github.com/Arvinderpal))
- Kubeadm: reset raises warnings if it cannot delete folders ([#85265](https://github.com/kubernetes/kubernetes/pull/85265), [@SataQiu](https://github.com/SataQiu))
- Kubeadm: enable the usage of the secure kube-scheduler and kube-controller-manager ports for health checks. For kube-scheduler the port was 10251 and becomes 10259; for kube-controller-manager it was 10252 and becomes 10257. ([#85043](https://github.com/kubernetes/kubernetes/pull/85043), [@neolit123](https://github.com/neolit123))
- A new kubelet command line option, `--reserved-cpus`, is introduced to explicitly define the CPU list that will be reserved for the system. For example, if `--reserved-cpus=0,1,2,3` is specified, then CPUs 0,1,2,3 will be reserved for the system. On a system with 24 CPUs, the user may specify `isolcpus=4-23` for the kernel option and use CPUs 4-23 for the user containers. ([#83592](https://github.com/kubernetes/kubernetes/pull/83592), [@jianzzha](https://github.com/jianzzha))
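A minimal sketch of the config-file equivalent, assuming the `reservedSystemCPUs` field in `KubeletConfiguration` (all other fields omitted):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Reserve CPUs 0-3 for system daemons, equivalent to --reserved-cpus=0,1,2,3:
reservedSystemCPUs: "0-3"
```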
- Kubelet: a configuration file specified via `--config` is now loaded with strict deserialization, which fails if the config file contains duplicate or unknown fields. This protects against accidentally running with config files that are malformed, mis-indented, or have typos in field names, and getting unexpected behavior. ([#83204](https://github.com/kubernetes/kubernetes/pull/83204), [@obitech](https://github.com/obitech))
- Kubeadm now propagates proxy environment variables to kube-proxy ([#84559](https://github.com/kubernetes/kubernetes/pull/84559), [@yastij](https://github.com/yastij))
- Update the latest validated version of Docker to 19.03 ([#84476](https://github.com/kubernetes/kubernetes/pull/84476), [@neolit123](https://github.com/neolit123))
- Update to Ingress-GCE v1.6.1 ([#84018](https://github.com/kubernetes/kubernetes/pull/84018), [@rramkumar1](https://github.com/rramkumar1))
- Kubeadm: enhance certs check-expiration to show the expiration info of related CAs ([#83932](https://github.com/kubernetes/kubernetes/pull/83932), [@SataQiu](https://github.com/SataQiu))
- Kubeadm: implemented structured output of 'kubeadm token list' in JSON, YAML, Go template and JsonPath formats ([#78764](https://github.com/kubernetes/kubernetes/pull/78764), [@bart0sh](https://github.com/bart0sh))
- Kubeadm: add support for `127.0.0.1` as advertise address. kubeadm will automatically replace this value with a matching global unicast IP address on the loopback interface. ([#83475](https://github.com/kubernetes/kubernetes/pull/83475), [@fabriziopandini](https://github.com/fabriziopandini))
- Kube-scheduler: a configuration file specified via `--config` is now loaded with strict deserialization, which fails if the config file contains duplicate or unknown fields. This protects against accidentally running with config files that are malformed, mis-indented, or have typos in field names, and getting unexpected behavior. ([#83030](https://github.com/kubernetes/kubernetes/pull/83030), [@obitech](https://github.com/obitech))
- Kubeadm: use the `--service-cluster-ip-range` flag at init, or use the ServiceSubnet field in the kubeadm config, to pass a comma separated list of Service CIDRs. ([#82473](https://github.com/kubernetes/kubernetes/pull/82473), [@Arvinderpal](https://github.com/Arvinderpal))
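For example, a dual-stack ServiceSubnet could be passed through the kubeadm config as in this sketch (the CIDR values are illustrative):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  # Comma-separated list of Service CIDRs (IPv4 and IPv6 here):
  serviceSubnet: "10.96.0.0/12,fd00:10:96::/112"
```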
- Update crictl to v1.16.1. ([#82856](https://github.com/kubernetes/kubernetes/pull/82856), [@Random-Liu](https://github.com/Random-Liu))
- Bump addon-resizer to 1.8.7 to fix issues with using deprecated extensions APIs ([#85864](https://github.com/kubernetes/kubernetes/pull/85864), [@liggitt](https://github.com/liggitt))
- A simple script-based hyperkube image that bundles all the necessary binaries. This is an equivalent replacement for the image based on the Go-based hyperkube command + image. ([#84662](https://github.com/kubernetes/kubernetes/pull/84662), [@dims](https://github.com/dims))
- Hyperkube will now be available in a new Github repository and will not be included in the kubernetes release from 1.17 onwards ([#83454](https://github.com/kubernetes/kubernetes/pull/83454), [@dims](https://github.com/dims))
- Remove prometheus cluster monitoring addon from kube-up ([#83442](https://github.com/kubernetes/kubernetes/pull/83442), [@serathius](https://github.com/serathius))
- SourcesReady provides the readiness of kubelet configuration sources such as apiserver update readiness. ([#81344](https://github.com/kubernetes/kubernetes/pull/81344), [@zouyee](https://github.com/zouyee))
- Set the `--cluster-dns` flag value to the kube-dns service IP whether or not NodeLocal DNSCache is enabled. NodeLocal DNSCache listens on both the link-local as well as the service IP. ([#84383](https://github.com/kubernetes/kubernetes/pull/84383), [@prameshj](https://github.com/prameshj))
- kube-dns add-on:
  - All containers are now being executed under more restrictive privileges.
  - Most of the containers now run as a non-root user and have the root filesystem set read-only.
  - The remaining container running as root only has the minimum Linux capabilities it requires to run.
  - Privilege escalation has been disabled for all containers. ([#82347](https://github.com/kubernetes/kubernetes/pull/82347), [@pjbgf](https://github.com/pjbgf))
- Kubernetes no longer monitors firewalld. On systems using firewalld for firewall maintenance, kube-proxy will take slightly longer to recover from disruptive firewalld operations that delete kube-proxy's iptables rules.
  As a side effect of these changes, kube-proxy's `sync_proxy_rules_last_timestamp_seconds` metric no longer behaves the way it used to; now it will only change when services or endpoints actually change, rather than reliably updating every 60 seconds (or however often it used to resync). If you are trying to monitor for whether iptables updates are failing, the `sync_proxy_rules_iptables_restore_failures_total` metric may be more useful. ([#81517](https://github.com/kubernetes/kubernetes/pull/81517), [@danwinship](https://github.com/danwinship))

### Instrumentation

- Bump version of event-exporter to 0.3.1, to switch it to protobuf. ([#83396](https://github.com/kubernetes/kubernetes/pull/83396), [@loburm](https://github.com/loburm))
- Bumps metrics-server version to v0.3.6 with the following bugfix:
  - Don't break metric storage when duplicate pod metrics are encountered, causing HPA to fail ([#83907](https://github.com/kubernetes/kubernetes/pull/83907), [@olagacek](https://github.com/olagacek))
- addons: elasticsearch discovery supports IPv6 ([#85543](https://github.com/kubernetes/kubernetes/pull/85543), [@SataQiu](https://github.com/SataQiu))
- Update Cluster Autoscaler to 1.17.0; changelog: https://github.com/kubernetes/autoscaler/releases/tag/cluster-autoscaler-1.17.0 ([#85610](https://github.com/kubernetes/kubernetes/pull/85610), [@losipiuk](https://github.com/losipiuk))

### Network

- The official kube-proxy image (used by kubeadm, among other things) is now compatible with systems running iptables 1.8 in "nft" mode, and will autodetect which mode it should use. ([#82966](https://github.com/kubernetes/kubernetes/pull/82966), [@danwinship](https://github.com/danwinship))
- Kubenet: added HostPort IPv6 support. HostPortManager operates only with one IP family, failing if it receives port mapping entries with different IP families. HostPortSyncer operates only with one IP family, skipping portmap entries with different IP families ([#80854](https://github.com/kubernetes/kubernetes/pull/80854), [@aojea](https://github.com/aojea))
- Kube-proxy now supports the DualStack feature with EndpointSlices and IPVS. ([#85246](https://github.com/kubernetes/kubernetes/pull/85246), [@robscott](https://github.com/robscott))
- Remove redundant API validation when using Service Topology with externalTrafficPolicy=Local ([#85346](https://github.com/kubernetes/kubernetes/pull/85346), [@andrewsykim](https://github.com/andrewsykim))
- Update github.com/vishvananda/netlink to v1.0.0 ([#83576](https://github.com/kubernetes/kubernetes/pull/83576), [@andrewsykim](https://github.com/andrewsykim))
- `kube-controller-manager`:
  - `--node-cidr-mask-size-ipv4 int32` Default: 24. Mask size for IPv4 node-cidr in a dual-stack cluster.
  - `--node-cidr-mask-size-ipv6 int32` Default: 64. Mask size for IPv6 node-cidr in a dual-stack cluster.

  These 2 flags can be used only for dual-stack clusters. For non dual-stack clusters, continue to use the `--node-cidr-mask-size` flag to configure the mask size. The default node cidr mask size for IPv6 was 24, which is now changed to 64. ([#79993](https://github.com/kubernetes/kubernetes/pull/79993), [@aramase](https://github.com/aramase))
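A sketch of setting these flags through kubeadm's `controllerManager.extraArgs` (the values shown are the stated defaults):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    # Dual-stack-only flags; single-stack clusters keep --node-cidr-mask-size:
    node-cidr-mask-size-ipv4: "24"
    node-cidr-mask-size-ipv6: "64"
```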
- Deprecate the cleanup-ipvs flag ([#83832](https://github.com/kubernetes/kubernetes/pull/83832), [@gongguan](https://github.com/gongguan))
- Kube-proxy: emits a warning when a malformed component config file is used with v1alpha1. ([#84143](https://github.com/kubernetes/kubernetes/pull/84143), [@phenixblue](https://github.com/phenixblue))
- Set config.BindAddress to IPv4 address `127.0.0.1` if not specified ([#83822](https://github.com/kubernetes/kubernetes/pull/83822), [@zouyee](https://github.com/zouyee))
- Updated kube-proxy ipvs README with the correct grep argument to list loaded ipvs modules ([#83677](https://github.com/kubernetes/kubernetes/pull/83677), [@pete911](https://github.com/pete911))
- The userspace mode of kube-proxy no longer confusingly logs messages about deleting endpoints that it is actually adding. ([#83644](https://github.com/kubernetes/kubernetes/pull/83644), [@danwinship](https://github.com/danwinship))
- Kube-proxy iptables probabilities are now more granular and will result in better distribution beyond 319 endpoints. ([#83599](https://github.com/kubernetes/kubernetes/pull/83599), [@robscott](https://github.com/robscott))
- Significant kube-proxy performance improvements for non-UDP ports. ([#83208](https://github.com/kubernetes/kubernetes/pull/83208), [@robscott](https://github.com/robscott))
- Improved performance of kube-proxy with EndpointSlice enabled with more efficient sorting. ([#83035](https://github.com/kubernetes/kubernetes/pull/83035), [@robscott](https://github.com/robscott))
- EndpointSlices are now beta for better Network Endpoint performance at scale. ([#84390](https://github.com/kubernetes/kubernetes/pull/84390), [@robscott](https://github.com/robscott))
- Updated EndpointSlices to use PublishNotReadyAddresses from Services. ([#84573](https://github.com/kubernetes/kubernetes/pull/84573), [@robscott](https://github.com/robscott))
- When upgrading to 1.17 with a cluster with EndpointSlices enabled, the `endpointslice.kubernetes.io/managed-by` label needs to be set on each EndpointSlice. ([#85359](https://github.com/kubernetes/kubernetes/pull/85359), [@robscott](https://github.com/robscott))
- Adds FQDN addressType support for EndpointSlice. ([#84091](https://github.com/kubernetes/kubernetes/pull/84091), [@robscott](https://github.com/robscott))
- Fix incorrect network policy description suggesting that pods are isolated when a network policy has no rules of a given type ([#84194](https://github.com/kubernetes/kubernetes/pull/84194), [@jackkleeman](https://github.com/jackkleeman))
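To illustrate the corrected semantics: pods are isolated only for the directions listed in `policyTypes`, not merely because rules of a given type are absent. For example, this sketch isolates all pods in its namespace for ingress, because `Ingress` is listed with no ingress rules:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}        # selects all pods in the namespace
  policyTypes:
    - Ingress            # listed with no ingress rules => all ingress denied
```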
- Fix bug where EndpointSlice controller would attempt to modify shared objects. ([#85368](https://github.com/kubernetes/kubernetes/pull/85368), [@robscott](https://github.com/robscott))
- Splitting IP address type into IPv4 and IPv6 for EndpointSlices ([#84971](https://github.com/kubernetes/kubernetes/pull/84971), [@robscott](https://github.com/robscott))
- Added appProtocol field to EndpointSlice Port ([#83815](https://github.com/kubernetes/kubernetes/pull/83815), [@howardjohn](https://github.com/howardjohn))
- The docker container runtime now enforces a 220 second timeout on container network operations. ([#71653](https://github.com/kubernetes/kubernetes/pull/71653), [@liucimin](https://github.com/liucimin))
- Fix panic in kubelet when running IPv4/IPv6 dual-stack mode with a CNI plugin ([#82508](https://github.com/kubernetes/kubernetes/pull/82508), [@aanm](https://github.com/aanm))
- EndpointSlice hostname is now set in the same conditions Endpoints hostname is. ([#84207](https://github.com/kubernetes/kubernetes/pull/84207), [@robscott](https://github.com/robscott))
- Improved the performance of Endpoint and EndpointSlice controllers by caching Service Selectors ([#84280](https://github.com/kubernetes/kubernetes/pull/84280), [@gongguan](https://github.com/gongguan))
- Significant kube-proxy performance improvements when using Endpoint Slices at scale. ([#83206](https://github.com/kubernetes/kubernetes/pull/83206), [@robscott](https://github.com/robscott))

### Node

- Mirror pods now include an ownerReference for the node that created them. ([#84485](https://github.com/kubernetes/kubernetes/pull/84485), [@tallclair](https://github.com/tallclair))
- Fixed a bug in the single-numa-policy of the TopologyManager. Previously, best-effort pods would result in a terminated state with a TopologyAffinity error. Now they will run as expected. ([#83777](https://github.com/kubernetes/kubernetes/pull/83777), [@lmdaly](https://github.com/lmdaly))
- Fixed a bug in the single-numa-node policy of the TopologyManager. Previously, pods that only requested CPU resources and did not request any third-party devices would fail to launch with a TopologyAffinity error. Now they will launch successfully. ([#83697](https://github.com/kubernetes/kubernetes/pull/83697), [@klueska](https://github.com/klueska))
- Fix error where metrics related to dynamic kubelet config aren't registered ([#83184](https://github.com/kubernetes/kubernetes/pull/83184), [@odinuge](https://github.com/odinuge))
- If a container fails because of ContainerCannotRun, do not utilize the FallbackToLogsOnError TerminationMessagePolicy, as it masks more useful logs. ([#81280](https://github.com/kubernetes/kubernetes/pull/81280), [@yqwang-ms](https://github.com/yqwang-ms))
- Use online nodes instead of possible nodes when discovering available NUMA nodes ([#83196](https://github.com/kubernetes/kubernetes/pull/83196), [@zouyee](https://github.com/zouyee))
- Use IPv4 in wincat port forward. ([#83036](https://github.com/kubernetes/kubernetes/pull/83036), [@liyanhui1228](https://github.com/liyanhui1228))
- Single static pod files and pod files from http endpoints cannot be larger than 10 MB. HTTP probe payloads are now truncated to 10KB. ([#82669](https://github.com/kubernetes/kubernetes/pull/82669), [@rphillips](https://github.com/rphillips))
- Limit the body length of exec readiness/liveness probes. Remote CRIs and the Docker shim read a max of 16MB of output, of which the exec probe itself inspects 10KB. ([#82514](https://github.com/kubernetes/kubernetes/pull/82514), [@dims](https://github.com/dims))
- Kubelet: Added kubelet serving certificate metric `server_rotation_seconds`, which is a histogram reporting the age of a just-rotated serving certificate in seconds. ([#84534](https://github.com/kubernetes/kubernetes/pull/84534), [@sambdavidson](https://github.com/sambdavidson))
- Reduce default NodeStatusReportFrequency to 5 minutes. With this change, periodic node status updates will be sent every 5m if node status doesn't change (otherwise they are still sent every 10s).
  - Bump NodeProblemDetector version to v0.8.0 to reduce forced NodeStatus update frequency to 5 minutes. ([#84007](https://github.com/kubernetes/kubernetes/pull/84007), [@wojtek-t](https://github.com/wojtek-t))
- The topology manager aligns resources for pods of all QoS classes with respect to NUMA locality, not just Guaranteed QoS pods. ([#83492](https://github.com/kubernetes/kubernetes/pull/83492), [@ConnorDoyle](https://github.com/ConnorDoyle))
- Fix a bug where a node Lease object may have been created without an OwnerReference. ([#84998](https://github.com/kubernetes/kubernetes/pull/84998), [@wojtek-t](https://github.com/wojtek-t))
- External facing APIs in the plugin registration and device plugin packages are now available under k8s.io/kubelet/pkg/apis/ ([#83551](https://github.com/kubernetes/kubernetes/pull/83551), [@dims](https://github.com/dims))

### Release

- Added the `crictl` Windows binaries as well as the Linux 32bit binary to the release archives ([#83944](https://github.com/kubernetes/kubernetes/pull/83944), [@saschagrunert](https://github.com/saschagrunert))
- Bumps the minimum version of Go required for building Kubernetes to 1.12.4. ([#83596](https://github.com/kubernetes/kubernetes/pull/83596), [@jktomer](https://github.com/jktomer))
- The deprecated mondo `kubernetes-test` tarball is no longer built. Users running Kubernetes e2e tests should use the `kubernetes-test-portable` and `kubernetes-test-{OS}-{ARCH}` tarballs instead. ([#83093](https://github.com/kubernetes/kubernetes/pull/83093), [@ixdy](https://github.com/ixdy))
### Scheduling

- Only validate duplication of the RequestedToCapacityRatio custom priority and allow other custom predicates/priorities ([#84646](https://github.com/kubernetes/kubernetes/pull/84646), [@liu-cong](https://github.com/liu-cong))
- Scheduler policy configs can no longer be declared multiple times ([#83963](https://github.com/kubernetes/kubernetes/pull/83963), [@damemi](https://github.com/damemi))
- TaintNodesByCondition was graduated to GA. CheckNodeMemoryPressure, CheckNodePIDPressure, CheckNodeDiskPressure, and CheckNodeCondition were accidentally removed since 1.12; the replacement is to use CheckNodeUnschedulablePred ([#84152](https://github.com/kubernetes/kubernetes/pull/84152), [@draveness](https://github.com/draveness))
- [migration phase 1] PodFitsHostPorts as filter plugin ([#83659](https://github.com/kubernetes/kubernetes/pull/83659), [@wgliang](https://github.com/wgliang))
- [migration phase 1] PodFitsResources as framework plugin ([#83650](https://github.com/kubernetes/kubernetes/pull/83650), [@wgliang](https://github.com/wgliang))
- [migration phase 1] PodMatchNodeSelector/NodeAffinity as filter plugin ([#83660](https://github.com/kubernetes/kubernetes/pull/83660), [@wgliang](https://github.com/wgliang))
- Add more tracing steps in generic_scheduler ([#83539](https://github.com/kubernetes/kubernetes/pull/83539), [@wgliang](https://github.com/wgliang))
- [migration phase 1] PodFitsHost as filter plugin ([#83662](https://github.com/kubernetes/kubernetes/pull/83662), [@wgliang](https://github.com/wgliang))
- Fixed a scheduler panic when using PodAffinity. ([#82841](https://github.com/kubernetes/kubernetes/pull/82841), [@Huang-Wei](https://github.com/Huang-Wei))
- Take the context as the first argument of Schedule. ([#82119](https://github.com/kubernetes/kubernetes/pull/82119), [@wgliang](https://github.com/wgliang))
- Fixed an issue where the correct PluginConfig.Args was not passed to the corresponding PluginFactory in kube-scheduler when multiple PluginConfig items are defined. ([#82483](https://github.com/kubernetes/kubernetes/pull/82483), [@everpeace](https://github.com/everpeace))
- Profiling is enabled by default in the scheduler ([#84835](https://github.com/kubernetes/kubernetes/pull/84835), [@denkensk](https://github.com/denkensk))
- The scheduler now reports metrics on cache size including nodes, pods, and assumed pods ([#83508](https://github.com/kubernetes/kubernetes/pull/83508), [@damemi](https://github.com/damemi))
- Users can now use component config to configure the NodeLabel plugin for the scheduler framework. ([#84297](https://github.com/kubernetes/kubernetes/pull/84297), [@liu-cong](https://github.com/liu-cong))
- Optimize inter-pod affinity preferredDuringSchedulingIgnoredDuringExecution type, up to 4x in some cases. ([#84264](https://github.com/kubernetes/kubernetes/pull/84264), [@ahg-g](https://github.com/ahg-g))
- Filter plugin for cloud provider storage predicate ([#84148](https://github.com/kubernetes/kubernetes/pull/84148), [@gongguan](https://github.com/gongguan))
- Refactor scheduler's framework permit API. ([#83756](https://github.com/kubernetes/kubernetes/pull/83756), [@hex108](https://github.com/hex108))
- Add incoming pods metrics to scheduler queue. ([#83577](https://github.com/kubernetes/kubernetes/pull/83577), [@liu-cong](https://github.com/liu-cong))
- Allow dynamically setting the glog logging level of kube-scheduler ([#83910](https://github.com/kubernetes/kubernetes/pull/83910), [@mrkm4ntr](https://github.com/mrkm4ntr))
- Add latency and request count metrics for the scheduler framework. ([#83569](https://github.com/kubernetes/kubernetes/pull/83569), [@liu-cong](https://github.com/liu-cong))
- Expose SharedInformerFactory in the framework handle ([#83663](https://github.com/kubernetes/kubernetes/pull/83663), [@draveness](https://github.com/draveness))
- Add per-pod scheduling metrics across 1 or more schedule attempts. ([#83674](https://github.com/kubernetes/kubernetes/pull/83674), [@liu-cong](https://github.com/liu-cong))
- Add `podInitialBackoffDurationSeconds` and `podMaxBackoffDurationSeconds` to the scheduler config API ([#81263](https://github.com/kubernetes/kubernetes/pull/81263), [@draveness](https://github.com/draveness))
- Expose the kubernetes client in the scheduling framework handle. ([#82432](https://github.com/kubernetes/kubernetes/pull/82432), [@draveness](https://github.com/draveness))
- Remove MaxPriority in the scheduler API; please use MaxNodeScore or MaxExtenderPriority instead. ([#83386](https://github.com/kubernetes/kubernetes/pull/83386), [@draveness](https://github.com/draveness))
- Consolidate ScoreWithNormalizePlugin into the ScorePlugin interface ([#83042](https://github.com/kubernetes/kubernetes/pull/83042), [@draveness](https://github.com/draveness))
- New APIs to allow adding/removing pods from pre-calculated prefilter state in the scheduling framework ([#82912](https://github.com/kubernetes/kubernetes/pull/82912), [@ahg-g](https://github.com/ahg-g))
- Added a Clone method to the scheduling framework's PluginContext and ContextData. ([#82951](https://github.com/kubernetes/kubernetes/pull/82951), [@ahg-g](https://github.com/ahg-g))
- Modified the scheduling framework's Filter API. ([#82842](https://github.com/kubernetes/kubernetes/pull/82842), [@ahg-g](https://github.com/ahg-g))
- Critical pods can now be created in namespaces other than kube-system. To limit critical pods to the kube-system namespace, cluster admins should create an admission configuration file limiting critical pods by default, and a matching quota object in the `kube-system` namespace permitting critical pods in that namespace. See https://kubernetes.io/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default for details. ([#76310](https://github.com/kubernetes/kubernetes/pull/76310), [@ravisantoshgudimetla](https://github.com/ravisantoshgudimetla))
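A sketch of the matching quota object, assuming the built-in `system-node-critical` and `system-cluster-critical` priority classes and an illustrative pod ceiling:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: system-critical-pods
  namespace: kube-system
spec:
  hard:
    pods: "1000"                 # illustrative ceiling
  scopeSelector:
    matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values:
          - system-node-critical
          - system-cluster-critical
```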
- Scheduler ComponentConfig fields are now pointers ([#83619](https://github.com/kubernetes/kubernetes/pull/83619), [@damemi](https://github.com/damemi))
- The Scheduler Policy API has a new recommended apiVersion, `apiVersion: kubescheduler.config.k8s.io/v1`, which is consistent with the scheduler API group `kubescheduler.config.k8s.io`. It holds the same API as the old apiVersion `apiVersion: v1`. ([#83578](https://github.com/kubernetes/kubernetes/pull/83578), [@Huang-Wei](https://github.com/Huang-Wei))
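A minimal policy file using the recommended apiVersion might look like this sketch (the predicate and priority entries are illustrative):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1    # previously: apiVersion: v1
kind: Policy
predicates:
  - name: PodFitsHostPorts
priorities:
  - name: LeastRequestedPriority
    weight: 1
```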
- Rename PluginContext to CycleState in the scheduling framework ([#83430](https://github.com/kubernetes/kubernetes/pull/83430), [@draveness](https://github.com/draveness))
- Some scheduler extender API fields are moved from `pkg/scheduler/api` to `pkg/scheduler/apis/extender/v1`. ([#83262](https://github.com/kubernetes/kubernetes/pull/83262), [@Huang-Wei](https://github.com/Huang-Wei))
- Kube-scheduler: emits a warning when a malformed component config file is used with v1alpha1. ([#84129](https://github.com/kubernetes/kubernetes/pull/84129), [@obitech](https://github.com/obitech))
- Kube-scheduler now falls back to emitting events using core/v1 Events when events.k8s.io/v1beta1 is disabled. ([#83692](https://github.com/kubernetes/kubernetes/pull/83692), [@yastij](https://github.com/yastij))
- Expand scheduler priority functions' and scheduling framework plugins' node score range to [0, 100]. Note: this change is internal and does not affect extenders and the RequestedToCapacityRatio custom priority, which are still expected to provide a [0, 10] range. ([#83522](https://github.com/kubernetes/kubernetes/pull/83522), [@draveness](https://github.com/draveness))

### Storage

- Bump CSI version to 1.2.0 ([#84832](https://github.com/kubernetes/kubernetes/pull/84832), [@gnufied](https://github.com/gnufied))
- CSI Migration: Fixes issue where all volumes with the same inline volume inner spec name were staged in the same path. Migrated inline volumes are now staged at a unique path per unique volume. ([#84754](https://github.com/kubernetes/kubernetes/pull/84754), [@davidz627](https://github.com/davidz627))
- CSI Migration: GCE PD access mode now reflects the read-only status of inline volumes; this allows multi-attach for read-only-many PDs ([#84809](https://github.com/kubernetes/kubernetes/pull/84809), [@davidz627](https://github.com/davidz627))
- CSI detach timeout increased from 10 seconds to 2 minutes ([#84321](https://github.com/kubernetes/kubernetes/pull/84321), [@cduchesne](https://github.com/cduchesne))
- The Ceph RBD volume plugin no longer uses any keyring (`/etc/ceph/ceph.client.lvs01cinder.keyring`, `/etc/ceph/ceph.keyring`, `/etc/ceph/keyring`, `/etc/ceph/keyring.bin`) for authentication. Ceph user credentials must be provided in PersistentVolume objects and referred Secrets. ([#75588](https://github.com/kubernetes/kubernetes/pull/75588), [@smileusd](https://github.com/smileusd))
- Validate Gluster IP ([#83104](https://github.com/kubernetes/kubernetes/pull/83104), [@zouyee](https://github.com/zouyee))
- The PersistentVolumeLabel admission plugin, responsible for labeling `PersistentVolumes` with topology labels, now does not overwrite existing labels on PVs that were dynamically provisioned. It trusts the dynamic provisioning that it provided the correct labels to the `PersistentVolume`, saving one potentially expensive cloud API call. `PersistentVolumes` created manually by users are labelled by the admission plugin in the same way as before. ([#82830](https://github.com/kubernetes/kubernetes/pull/82830), [@jsafrane](https://github.com/jsafrane))
- Existing PVs are converted to use volume topology if migration is enabled. ([#83394](https://github.com/kubernetes/kubernetes/pull/83394), [@bertinatto](https://github.com/bertinatto))
- local: support local filesystem volume with block resource reconstruction ([#84218](https://github.com/kubernetes/kubernetes/pull/84218), [@cofyc](https://github.com/cofyc))
- Fixed binding of block PersistentVolumes / PersistentVolumeClaims when the BlockVolume feature is off. ([#84049](https://github.com/kubernetes/kubernetes/pull/84049), [@jsafrane](https://github.com/jsafrane))
- Report a non-confusing error for negative storage size in PVC spec. ([#82759](https://github.com/kubernetes/kubernetes/pull/82759), [@sttts](https://github.com/sttts))
- Fixed "requested device X but found Y" attach error on AWS. ([#85675](https://github.com/kubernetes/kubernetes/pull/85675), [@jsafrane](https://github.com/jsafrane))
- Reduced frequency of DescribeVolumes calls of AWS API when attaching/detaching a volume. ([#84181](https://github.com/kubernetes/kubernetes/pull/84181), [@jsafrane](https://github.com/jsafrane))
- Fixed attachment of AWS volumes that have just been detached. ([#83567](https://github.com/kubernetes/kubernetes/pull/83567), [@jsafrane](https://github.com/jsafrane))
- Fix possible fd leak and dirs not being closed when using OpenStack ([#82873](https://github.com/kubernetes/kubernetes/pull/82873), [@odinuge](https://github.com/odinuge))
- local: support local volume block mode reconstruction ([#84173](https://github.com/kubernetes/kubernetes/pull/84173), [@cofyc](https://github.com/cofyc))
- Fixed cleanup of raw block devices after kubelet restart. ([#83451](https://github.com/kubernetes/kubernetes/pull/83451), [@jsafrane](https://github.com/jsafrane))
- Add data cache flushing during unmount device for the GCE-PD driver on Windows Server. ([#83591](https://github.com/kubernetes/kubernetes/pull/83591), [@jingxu97](https://github.com/jingxu97))

### Windows

- Adds Windows Server build information as a label on the node. ([#84472](https://github.com/kubernetes/kubernetes/pull/84472), [@gab-satchi](https://github.com/gab-satchi))
- Fixes kube-proxy bug accessing self nodeip:port on windows ([#83027](https://github.com/kubernetes/kubernetes/pull/83027), [@liggitt](https://github.com/liggitt))
- When using Containerd on Windows, the `TerminationMessagePath` file will now be mounted in the Windows Pod. ([#83057](https://github.com/kubernetes/kubernetes/pull/83057), [@bclau](https://github.com/bclau))
- Fix kubelet metrics gathering on non-English Windows hosts ([#84156](https://github.com/kubernetes/kubernetes/pull/84156), [@wawa0210](https://github.com/wawa0210))

### Dependencies

- Update etcd client side to v3.4.3 ([#83987](https://github.com/kubernetes/kubernetes/pull/83987), [@wenjiaswe](https://github.com/wenjiaswe))
- Kubernetes now requires go1.13.4+ to build ([#82809](https://github.com/kubernetes/kubernetes/pull/82809), [@liggitt](https://github.com/liggitt))
- Update to use go1.12.12 ([#84064](https://github.com/kubernetes/kubernetes/pull/84064), [@cblecker](https://github.com/cblecker))
- Update to go 1.12.10 ([#83139](https://github.com/kubernetes/kubernetes/pull/83139), [@cblecker](https://github.com/cblecker))
- Update default etcd server version to 3.4.3 ([#84329](https://github.com/kubernetes/kubernetes/pull/84329), [@jingyih](https://github.com/jingyih))
- Upgrade default etcd server version to 3.3.17 ([#83804](https://github.com/kubernetes/kubernetes/pull/83804), [@jpbetz](https://github.com/jpbetz))
- Upgrade to etcd client 3.3.17 to fix a bug where the etcd client does not parse IPv6 addresses correctly when members are joining, and to fix a bug where failover on a multi-member etcd cluster fails certificate check on DNS mismatch ([#83801](https://github.com/kubernetes/kubernetes/pull/83801), [@jpbetz](https://github.com/jpbetz))

### Detailed go Dependency Changes

#### Added

- github.com/OpenPeeDeeP/depguard: v1.0.1
- github.com/StackExchange/wmi: 5d04971
- github.com/agnivade/levenshtein: v1.0.1
- github.com/alecthomas/template: a0175ee
- github.com/alecthomas/units: 2efee85
- github.com/andreyvit/diff: c7f18ee
- github.com/anmitsu/go-shlex: 648efa6
- github.com/bazelbuild/rules_go: 6dae44d
- github.com/bgentry/speakeasy: v0.1.0
- github.com/bradfitz/go-smtpd: deb6d62
- github.com/cockroachdb/datadriven: 80d97fb
- github.com/creack/pty: v1.1.7
- github.com/gliderlabs/ssh: v0.1.1
- github.com/go-critic/go-critic: 1df3008
- github.com/go-kit/kit: v0.8.0
- github.com/go-lintpack/lintpack: v0.5.2
- github.com/go-logfmt/logfmt: v0.3.0
- github.com/go-ole/go-ole: v1.2.1
- github.com/go-stack/stack: v1.8.0
- github.com/go-toolsmith/astcast: v1.0.0
- github.com/go-toolsmith/astcopy: v1.0.0
- github.com/go-toolsmith/astequal: v1.0.0
- github.com/go-toolsmith/astfmt: v1.0.0
- github.com/go-toolsmith/astinfo: 9809ff7
- github.com/go-toolsmith/astp: v1.0.0
- github.com/go-toolsmith/pkgload: v1.0.0
- github.com/go-toolsmith/strparse: v1.0.0
- github.com/go-toolsmith/typep: v1.0.0
- github.com/gobwas/glob: v0.2.3
- github.com/golangci/check: cfe4005
- github.com/golangci/dupl: 3e9179a
- github.com/golangci/errcheck: ef45e06
- github.com/golangci/go-misc: 927a3d8
- github.com/golangci/go-tools: e32c541
- github.com/golangci/goconst: 041c5f2
- github.com/golangci/gocyclo: 2becd97
- github.com/golangci/gofmt: 0b8337e
- github.com/golangci/golangci-lint: v1.18.0
- github.com/golangci/gosec: 66fb7fc
- github.com/golangci/ineffassign: 42439a7
- github.com/golangci/lint-1: ee948d0
- github.com/golangci/maligned: b1d8939
- github.com/golangci/misspell: 950f5d1
- github.com/golangci/prealloc: 215b22d
- github.com/golangci/revgrep: d9c87f5
- github.com/golangci/unconvert: 28b1c44
- github.com/google/go-github: v17.0.0+incompatible
- github.com/google/go-querystring: v1.0.0
- github.com/gostaticanalysis/analysisutil: v0.0.3
- github.com/jellevandenhooff/dkim: f50fe3d
- github.com/julienschmidt/httprouter: v1.2.0
- github.com/klauspost/compress: v1.4.1
- github.com/kr/logfmt: b84e30a
- github.com/logrusorgru/aurora: a7b3b31
- github.com/mattn/go-runewidth: v0.0.2
- github.com/mattn/goveralls: v0.0.2
- github.com/mitchellh/go-ps: 4fdf99a
- github.com/mozilla/tls-observatory: 8791a20
- github.com/mwitkow/go-conntrack: cc309e4
- github.com/nbutton23/zxcvbn-go: eafdab6
- github.com/olekukonko/tablewriter: a0225b3
- github.com/quasilyte/go-consistent: c6f3937
- github.com/rogpeppe/fastuuid: 6724a57
- github.com/ryanuber/go-glob: 256dc44
- github.com/sergi/go-diff: v1.0.0
- github.com/shirou/gopsutil: c95755e
- github.com/shirou/w32: bb4de01
- github.com/shurcooL/go-goon: 37c2f52
- github.com/shurcooL/go: 9e1955d
- github.com/sourcegraph/go-diff: v0.5.1
- github.com/tarm/serial: 98f6abe
- github.com/tidwall/pretty: v1.0.0
- github.com/timakin/bodyclose: 87058b9
- github.com/ultraware/funlen: v0.0.2
- github.com/urfave/cli: v1.20.0
- github.com/valyala/bytebufferpool: v1.0.0
- github.com/valyala/fasthttp: v1.2.0
- github.com/valyala/quicktemplate: v1.1.1
- github.com/valyala/tcplisten: ceec8f9
- github.com/vektah/gqlparser: v1.1.2
- go.etcd.io/etcd: 3cf2f69
- go.mongodb.org/mongo-driver: v1.1.2
- go4.org: 417644f
- golang.org/x/build: 2835ba2
- golang.org/x/perf: 6e6d33e
- golang.org/x/xerrors: a985d34
- gopkg.in/alecthomas/kingpin.v2: v2.2.6
- gopkg.in/cheggaaa/pb.v1: v1.0.25
- gopkg.in/resty.v1: v1.12.0
- grpc.go4.org: 11d0a25
- k8s.io/system-validators: v1.0.4
- mvdan.cc/interfacer: c200402
- mvdan.cc/lint: adc824a
- mvdan.cc/unparam: fbb5962
- sourcegraph.com/sqs/pbtypes: d3ebe8f

#### Changed

- github.com/Azure/azure-sdk-for-go: v32.5.0+incompatible → v35.0.0+incompatible
- github.com/Microsoft/go-winio: v0.4.11 → v0.4.14
- github.com/bazelbuild/bazel-gazelle: c728ce9 → 70208cb
- github.com/bazelbuild/buildtools: 80c7f0d → 69366ca
- github.com/beorn7/perks: 3a771d9 → v1.0.0
- github.com/container-storage-interface/spec: v1.1.0 → v1.2.0
- github.com/coredns/corefile-migration: v1.0.2 → v1.0.4
- github.com/coreos/etcd: v3.3.17+incompatible → v3.3.10+incompatible
- github.com/coreos/go-systemd: 39ca1b0 → 95778df
- github.com/docker/go-units: v0.3.3 → v0.4.0
- github.com/docker/libnetwork: a9cd636 → f0e46a7
- github.com/fatih/color: v1.6.0 → v1.7.0
- github.com/ghodss/yaml: c7ce166 → v1.0.0
- github.com/go-openapi/analysis: v0.19.2 → v0.19.5
- github.com/go-openapi/jsonpointer: v0.19.2 → v0.19.3
- github.com/go-openapi/jsonreference: v0.19.2 → v0.19.3
- github.com/go-openapi/loads: v0.19.2 → v0.19.4
- github.com/go-openapi/runtime: v0.19.0 → v0.19.4
- github.com/go-openapi/spec: v0.19.2 → v0.19.3
- github.com/go-openapi/strfmt: v0.19.0 → v0.19.3
- github.com/go-openapi/swag: v0.19.2 → v0.19.5
- github.com/go-openapi/validate: v0.19.2 → v0.19.5
- github.com/godbus/dbus: v4.1.0+incompatible → 2ff6f7f
- github.com/golang/protobuf: v1.3.1 → v1.3.2
- github.com/google/btree: 4030bb1 → v1.0.0
- github.com/google/cadvisor: v0.34.0 → v0.35.0
- github.com/gregjones/httpcache: 787624d → 9cad4c3
- github.com/grpc-ecosystem/go-grpc-middleware: cfaf568 → f849b54
- github.com/grpc-ecosystem/grpc-gateway: v1.3.0 → v1.9.5
- github.com/heketi/heketi: v9.0.0+incompatible → c2e2a4a
- github.com/json-iterator/go: v1.1.7 → v1.1.8
- github.com/mailru/easyjson: 94de47d → v0.7.0
- github.com/mattn/go-isatty: v0.0.3 → v0.0.9
- github.com/mindprince/gonvml: fee913c → 9ebdce4
- github.com/mrunalp/fileutils: 4ee1cc9 → 7d4729f
- github.com/munnerz/goautoneg: a547fc6 → a7dc8b6
- github.com/onsi/ginkgo: v1.8.0 → v1.10.1
- github.com/onsi/gomega: v1.5.0 → v1.7.0
- github.com/opencontainers/runc: 6cc5158 → v1.0.0-rc9
- github.com/opencontainers/selinux: v1.2.2 → 5215b18
- github.com/pkg/errors: v0.8.0 → v0.8.1
- github.com/prometheus/client_golang: v0.9.2 → v1.0.0
- github.com/prometheus/client_model: 5c3871d → fd36f42
- github.com/prometheus/common: 4724e92 → v0.4.1
- github.com/prometheus/procfs: 1dc9a6c → v0.0.2
- github.com/soheilhy/cmux: v0.1.3 → v0.1.4
- github.com/spf13/pflag: v1.0.3 → v1.0.5
- github.com/stretchr/testify: v1.3.0 → v1.4.0
- github.com/syndtr/gocapability: e7cb7fa → d983527
- github.com/vishvananda/netlink: b2de5d1 → v1.0.0
- github.com/vmware/govmomi: v0.20.1 → v0.20.3
- github.com/xiang90/probing: 07dd2e8 → 43a291a
- go.uber.org/atomic: 8dc6146 → v1.3.2
- go.uber.org/multierr: ddea229 → v1.1.0
- go.uber.org/zap: 67bc79d → v1.10.0
- golang.org/x/crypto: e84da03 → 60c769a
- golang.org/x/lint: 8f45f77 → 959b441
- golang.org/x/net: cdfb69a → 13f9640
- golang.org/x/oauth2: 9f33145 → 0f29369
- golang.org/x/sync: 42b3178 → cd5d95a
- golang.org/x/sys: 3b52091 → fde4db3
- golang.org/x/text: e6919f6 → v0.3.2
- golang.org/x/time: f51c127 → 9d24e82
- golang.org/x/tools: 6e04913 → 65e3620
- google.golang.org/grpc: v1.23.0 → v1.23.1
- gopkg.in/inf.v0: v0.9.0 → v0.9.1
- k8s.io/klog: v0.4.0 → v1.0.0
- k8s.io/kube-openapi: 743ec37 → 30be4d1
- k8s.io/repo-infra: 00fe14e → v0.0.1-alpha.1
- k8s.io/utils: 581e001 → e782cd3
- sigs.k8s.io/structured-merge-diff: 6149e45 → b1b620d

#### Removed

- github.com/cloudflare/cfssl: 56268a6
- github.com/coreos/bbolt: v1.3.3
- github.com/coreos/rkt: v1.30.0
- github.com/globalsign/mgo: eeefdec
- github.com/google/certificate-transparency-go: v1.0.21
- github.com/heketi/rest: aa6a652
- github.com/heketi/utils: 435bc5b
- github.com/pborman/uuid: v1.2.0

- The following deprecated metrics are removed:
  - kubelet_pod_worker_latency_microseconds
  - kubelet_pod_start_latency_microseconds
  - kubelet_cgroup_manager_latency_microseconds
  - kubelet_pod_worker_start_latency_microseconds
  - kubelet_pleg_relist_latency_microseconds
+  - kubelet_pod_worker_latency_microseconds
+  - kubelet_pod_start_latency_microseconds
+  - kubelet_cgroup_manager_latency_microseconds
+  - kubelet_pod_worker_start_latency_microseconds
+  - kubelet_pleg_relist_latency_microseconds
+  - kubelet_pleg_relist_interval_microseconds
+  - kubelet_eviction_stats_age_microseconds
+  - kubelet_runtime_operations
+  - kubelet_runtime_operations_latency_microseconds
+  - kubelet_runtime_operations_errors
+  - kubelet_device_plugin_registration_count
+  - kubelet_device_plugin_alloc_latency_microseconds
+  - kubelet_docker_operations
+  - kubelet_docker_operations_latency_microseconds
+  - kubelet_docker_operations_errors
+  - kubelet_docker_operations_timeout
+  - network_plugin_operations_latency_microseconds ([#83841](https://github.com/kubernetes/kubernetes/pull/83841), [@RainbowMango](https://github.com/RainbowMango)) [SIG Network and Node]
+- Kube-apiserver metrics will now include request counts, latencies, and response sizes for /healthz, /livez, and /readyz requests. ([#83598](https://github.com/kubernetes/kubernetes/pull/83598), [@jktomer](https://github.com/jktomer)) [SIG API Machinery]
+- Kubelet now exports a `server_expiration_renew_failure` and `client_expiration_renew_failure` metric counter if the certificate rotations cannot be performed. ([#84614](https://github.com/kubernetes/kubernetes/pull/84614), [@rphillips](https://github.com/rphillips)) [SIG API Machinery, Auth, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation, Node and Release]
+- Kubelet: the metric `process_start_time_seconds` is now marked with the ALPHA stability level. ([#85446](https://github.com/kubernetes/kubernetes/pull/85446), [@RainbowMango](https://github.com/RainbowMango)) [SIG API Machinery, Cluster Lifecycle, Instrumentation and Node]
+- New metric `kubelet_pleg_last_seen_seconds` to aid diagnosis of PLEG not healthy issues. ([#86251](https://github.com/kubernetes/kubernetes/pull/86251), [@bboreham](https://github.com/bboreham)) [SIG Node]
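+
+If you alert on any of the removed `*_microseconds` series, the change above is easy to check against a live cluster. A minimal sketch, assuming a node named `my-node` and RBAC that allows proxying to the kubelet (those names are placeholders, not part of the release):
+
+```bash
+# Dump one node's kubelet metrics through the API server proxy.
+NODE=my-node
+kubectl get --raw "/api/v1/nodes/${NODE}/proxy/metrics" > kubelet-metrics.txt
+
+# The deprecated *_microseconds series should no longer appear:
+grep -c 'kubelet_pod_worker_latency_microseconds' kubelet-metrics.txt || true
+
+# The new PLEG gauge from #86251 should be present:
+grep 'kubelet_pleg_last_seen_seconds' kubelet-metrics.txt
+```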
+
+### Other (Bug, Cleanup or Flake)
+
+- Fixed a regression with clients prior to 1.15 not being able to update podIP in pod status, or podCIDR in node spec, against >= 1.16 API servers ([#88505](https://github.com/kubernetes/kubernetes/pull/88505), [@liggitt](https://github.com/liggitt)) [SIG Apps and Network]
+- Fixed "kubectl describe statefulsets.apps" printing garbage for rolling update partition ([#85846](https://github.com/kubernetes/kubernetes/pull/85846), [@phil9909](https://github.com/phil9909)) [SIG CLI]
+- Add an event to the PV when the filesystem on the PV does not match the actual filesystem on disk ([#86982](https://github.com/kubernetes/kubernetes/pull/86982), [@gnufied](https://github.com/gnufied)) [SIG Storage]
+- Add azure disk WriteAccelerator support ([#87945](https://github.com/kubernetes/kubernetes/pull/87945), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Add delays between goroutines for vm instance update ([#88094](https://github.com/kubernetes/kubernetes/pull/88094), [@aramase](https://github.com/aramase)) [SIG Cloud Provider]
+- Add init containers log to cluster dump info. ([#88324](https://github.com/kubernetes/kubernetes/pull/88324), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Addons: elasticsearch discovery supports IPv6 ([#85543](https://github.com/kubernetes/kubernetes/pull/85543), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle and Instrumentation]
+- Adds the "volume.beta.kubernetes.io/migrated-to" annotation to PVs and PVCs when they are migrated, to signal external provisioners to pick up those objects for provisioning and deleting. ([#87098](https://github.com/kubernetes/kubernetes/pull/87098), [@davidz627](https://github.com/davidz627)) [SIG Storage]
+- All api-server request log lines are now in a more greppable format. ([#87203](https://github.com/kubernetes/kubernetes/pull/87203), [@lavalamp](https://github.com/lavalamp)) [SIG API Machinery]
+- Azure VMSS LoadBalancerBackendAddressPools updating has been improved with sequential-sync + concurrent-async requests. ([#88699](https://github.com/kubernetes/kubernetes/pull/88699), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Azure cloud provider now obtains an AAD token whose audience claim will not have the spn: prefix ([#87590](https://github.com/kubernetes/kubernetes/pull/87590), [@weinong](https://github.com/weinong)) [SIG Cloud Provider]
+- AzureFile and CephFS use the new Mount library that prevents logging of sensitive mount options. ([#88684](https://github.com/kubernetes/kubernetes/pull/88684), [@saad-ali](https://github.com/saad-ali)) [SIG Storage]
+- Bind dns-horizontal containers to Linux nodes to avoid Windows scheduling on Kubernetes clusters that include both Linux and Windows nodes ([#83364](https://github.com/kubernetes/kubernetes/pull/83364), [@wawa0210](https://github.com/wawa0210)) [SIG Cluster Lifecycle and Windows]
+- Bind kube-dns containers to Linux nodes to avoid Windows scheduling ([#83358](https://github.com/kubernetes/kubernetes/pull/83358), [@wawa0210](https://github.com/wawa0210)) [SIG Cluster Lifecycle and Windows]
+- Bind metadata-agent containers to Linux nodes to avoid Windows scheduling on Kubernetes clusters that include both Linux and Windows nodes ([#83363](https://github.com/kubernetes/kubernetes/pull/83363), [@wawa0210](https://github.com/wawa0210)) [SIG Cluster Lifecycle, Instrumentation and Windows]
+- Bind metrics-server containers to Linux nodes to avoid Windows scheduling on Kubernetes clusters that include both Linux and Windows nodes ([#83362](https://github.com/kubernetes/kubernetes/pull/83362), [@wawa0210](https://github.com/wawa0210)) [SIG Cluster Lifecycle, Instrumentation and Windows]
+- Bug fixes: Make sure we include latest packages node #351 (@caseydavenport) ([#84163](https://github.com/kubernetes/kubernetes/pull/84163), [@david-tigera](https://github.com/david-tigera)) [SIG Cluster Lifecycle]
+- CPU limits are now respected for Windows containers. If a node is over-provisioned, no weighting is used; only limits are respected. ([#86101](https://github.com/kubernetes/kubernetes/pull/86101), [@PatrickLang](https://github.com/PatrickLang)) [SIG Node, Testing and Windows]
+- Changed core_pattern on COS nodes to be an absolute path. ([#86329](https://github.com/kubernetes/kubernetes/pull/86329), [@mml](https://github.com/mml)) [SIG Cluster Lifecycle and Node]
+- Client-go certificate manager rotation gained the ability to preserve optional intermediate chains accompanying issued certificates ([#88744](https://github.com/kubernetes/kubernetes/pull/88744), [@jackkleeman](https://github.com/jackkleeman)) [SIG API Machinery and Auth]
+- Cloud provider config CloudProviderBackoffMode has been removed since it is no longer used. ([#88463](https://github.com/kubernetes/kubernetes/pull/88463), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Conformance image now depends on stretch-slim instead of debian-hyperkube-base, as that image is being deprecated and removed. ([#88702](https://github.com/kubernetes/kubernetes/pull/88702), [@dims](https://github.com/dims)) [SIG Cluster Lifecycle, Release and Testing]
+- Deprecate --generator flag from kubectl create commands ([#88655](https://github.com/kubernetes/kubernetes/pull/88655), [@soltysh](https://github.com/soltysh)) [SIG CLI]
+- During the initialization phase (preflight), kubeadm now verifies the presence of the conntrack executable ([#85857](https://github.com/kubernetes/kubernetes/pull/85857), [@hnanni](https://github.com/hnanni)) [SIG Cluster Lifecycle]
+- EndpointSlice should not contain endpoints for terminating pods ([#89056](https://github.com/kubernetes/kubernetes/pull/89056), [@andrewsykim](https://github.com/andrewsykim)) [SIG Apps and Network]
+- Evictions due to pods breaching their ephemeral storage limits are now recorded by the `kubelet_evictions` metric and can be alerted on. ([#87906](https://github.com/kubernetes/kubernetes/pull/87906), [@smarterclayton](https://github.com/smarterclayton)) [SIG Node]
+- Filter published OpenAPI schema by making nullable, required fields non-required, in order to avoid kubectl wrongly rejecting null values. ([#85722](https://github.com/kubernetes/kubernetes/pull/85722), [@sttts](https://github.com/sttts)) [SIG API Machinery]
+- Fix /readyz to return error immediately after a shutdown is initiated, before the --shutdown-delay-duration has elapsed. ([#88911](https://github.com/kubernetes/kubernetes/pull/88911), [@tkashem](https://github.com/tkashem)) [SIG API Machinery]
+- Fix a potential API server memory leak in processing watch requests. ([#85410](https://github.com/kubernetes/kubernetes/pull/85410), [@answer1991](https://github.com/answer1991)) [SIG API Machinery]
+- Fix EndpointSlice controller race condition and ensure that it handles external changes to EndpointSlices. ([#85703](https://github.com/kubernetes/kubernetes/pull/85703), [@robscott](https://github.com/robscott)) [SIG Apps and Network]
+- Fix IPv6 addresses lost issue in pure IPv6 vSphere environments ([#86001](https://github.com/kubernetes/kubernetes/pull/86001), [@hubv](https://github.com/hubv)) [SIG Cloud Provider]
+- Fix LoadBalancer rule checking so that no unexpected LoadBalancer updates are made ([#85990](https://github.com/kubernetes/kubernetes/pull/85990), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Fix a bug in kube-proxy that caused it to crash when using load balancers with a different IP family ([#87117](https://github.com/kubernetes/kubernetes/pull/87117), [@aojea](https://github.com/aojea)) [SIG Network]
+- Fix a bug in port-forward: a named port did not work with a service ([#85511](https://github.com/kubernetes/kubernetes/pull/85511), [@oke-py](https://github.com/oke-py)) [SIG CLI]
+- Fix a bug in the dual-stack IPVS proxier where stale IPv6 endpoints were not being cleaned up ([#87695](https://github.com/kubernetes/kubernetes/pull/87695), [@andrewsykim](https://github.com/andrewsykim)) [SIG Network]
+- Fix a bug where an orphaned revision could not be adopted and the StatefulSet could not be synced ([#86801](https://github.com/kubernetes/kubernetes/pull/86801), [@likakuli](https://github.com/likakuli)) [SIG Apps]
+- Fix a bug where ExternalTrafficPolicy is not applied to service ExternalIPs. ([#88786](https://github.com/kubernetes/kubernetes/pull/88786), [@freehan](https://github.com/freehan)) [SIG Network]
+- Fix a bug where kubenet fails to parse the tc output. ([#83572](https://github.com/kubernetes/kubernetes/pull/83572), [@chendotjs](https://github.com/chendotjs)) [SIG Network]
+- Fix a regression in kubenet that prevented pods from obtaining IP addresses ([#85993](https://github.com/kubernetes/kubernetes/pull/85993), [@chendotjs](https://github.com/chendotjs)) [SIG Network and Node]
+- Fix azure file AuthorizationFailure ([#85475](https://github.com/kubernetes/kubernetes/pull/85475), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Fix bug where EndpointSlice controller would attempt to modify shared objects. ([#85368](https://github.com/kubernetes/kubernetes/pull/85368), [@robscott](https://github.com/robscott)) [SIG API Machinery, Apps and Network]
+- Fix handling of the aws-load-balancer-security-groups annotation. Security groups assigned with this annotation are no longer modified by Kubernetes, which is the behaviour most users expect. No unnecessary security groups are created anymore if this annotation is used. ([#83446](https://github.com/kubernetes/kubernetes/pull/83446), [@Elias481](https://github.com/Elias481)) [SIG Cloud Provider]
+- Fix invalid VMSS updates due to incorrect cache ([#89002](https://github.com/kubernetes/kubernetes/pull/89002), [@ArchangelSDY](https://github.com/ArchangelSDY)) [SIG Cloud Provider]
+- Fix isCurrentInstance for Windows by removing the dependency on hostname. ([#89138](https://github.com/kubernetes/kubernetes/pull/89138), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Fix issue #85805 about a resource not found in the azure cloud provider when the LoadBalancer is specified in another resource group. ([#86502](https://github.com/kubernetes/kubernetes/pull/86502), [@levimm](https://github.com/levimm)) [SIG Cloud Provider]
+- Fix kubectl annotate error when local=true is set ([#86952](https://github.com/kubernetes/kubernetes/pull/86952), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Fix kubectl create deployment image name ([#86636](https://github.com/kubernetes/kubernetes/pull/86636), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Fix `kubectl drain` ignoring daemonsets and other resources. ([#87361](https://github.com/kubernetes/kubernetes/pull/87361), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Fix missing "apiVersion" for "involvedObject" in Events for Nodes. ([#87537](https://github.com/kubernetes/kubernetes/pull/87537), [@uthark](https://github.com/uthark)) [SIG Apps and Node]
+- Fix nil pointer dereference in azure cloud provider ([#85975](https://github.com/kubernetes/kubernetes/pull/85975), [@ldx](https://github.com/ldx)) [SIG Cloud Provider]
+- Fix regression in statefulset conversion which prevented applying a statefulset multiple times. ([#87706](https://github.com/kubernetes/kubernetes/pull/87706), [@liggitt](https://github.com/liggitt)) [SIG Apps and Testing]
+- Fix route conflicted operations when updating multiple routes together ([#88209](https://github.com/kubernetes/kubernetes/pull/88209), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Fix that prevents repeated fetching of PVC/PV objects by kubelet when processing of pod volumes fails. While this prevents hammering the API server in these error scenarios, it means that some errors in processing volume(s) for a pod could now take up to 2-3 minutes before retry. ([#88141](https://github.com/kubernetes/kubernetes/pull/88141), [@tedyu](https://github.com/tedyu)) [SIG Node and Storage]
+- Fix the bug that a public IP's DNS record is deleted when the DNS label service annotation is not set. ([#87246](https://github.com/kubernetes/kubernetes/pull/87246), [@nilo19](https://github.com/nilo19)) [SIG Cloud Provider]
+- Fix control plane hosts rolling upgrade causing thundering herd of LISTs on etcd leading to control plane unavailability. ([#86430](https://github.com/kubernetes/kubernetes/pull/86430), [@wojtek-t](https://github.com/wojtek-t)) [SIG API Machinery, Node and Testing]
+- Fix: add azure disk migration support for CSINode ([#88014](https://github.com/kubernetes/kubernetes/pull/88014), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Fix: add non-retriable errors in azure clients ([#87941](https://github.com/kubernetes/kubernetes/pull/87941), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fix: add remediation in azure disk attach/detach ([#88444](https://github.com/kubernetes/kubernetes/pull/88444), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fix: azure data disk should use same key as os disk by default ([#86351](https://github.com/kubernetes/kubernetes/pull/86351), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fix: azure disk could not be mounted on Standard_DC4s/DC2s instances ([#86612](https://github.com/kubernetes/kubernetes/pull/86612), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Fix: azure file mount timeout issue ([#88610](https://github.com/kubernetes/kubernetes/pull/88610), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Fix: check disk status before deleting azure disk ([#88360](https://github.com/kubernetes/kubernetes/pull/88360), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fix: corrupted mount point in csi driver ([#88569](https://github.com/kubernetes/kubernetes/pull/88569), [@andyzhangx](https://github.com/andyzhangx)) [SIG Storage]
+- Fix: get azure disk lun timeout issue ([#88158](https://github.com/kubernetes/kubernetes/pull/88158), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Fix: update azure disk max count ([#88201](https://github.com/kubernetes/kubernetes/pull/88201), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Fixed "requested device X but found Y" attach error on AWS. ([#85675](https://github.com/kubernetes/kubernetes/pull/85675), [@jsafrane](https://github.com/jsafrane)) [SIG Cloud Provider and Storage]
+- Fixed NetworkPolicy validation so that `Except` values are no longer accepted when they are outside the CIDR range. ([#86578](https://github.com/kubernetes/kubernetes/pull/86578), [@tnqn](https://github.com/tnqn)) [SIG Network]
+- Fixed a bug in the TopologyManager. Previously, the TopologyManager would only guarantee alignment if container creation was serialized in some way. Alignment is now guaranteed under all scenarios of container creation. ([#87759](https://github.com/kubernetes/kubernetes/pull/87759), [@klueska](https://github.com/klueska)) [SIG Node]
+- Fixed a bug which could prevent a provider ID from ever being set for a node if an error occurred while determining the provider ID when the node was added. ([#87043](https://github.com/kubernetes/kubernetes/pull/87043), [@zjs](https://github.com/zjs)) [SIG Apps and Cloud Provider]
+- Fixed a data race in the kubelet image manager that could cause static pod workers to silently stop working. ([#88915](https://github.com/kubernetes/kubernetes/pull/88915), [@roycaihw](https://github.com/roycaihw)) [SIG Node]
+- Fixed a panic in the kubelet cleaning up pod volumes ([#86277](https://github.com/kubernetes/kubernetes/pull/86277), [@tedyu](https://github.com/tedyu)) [SIG Storage]
+- Fixed a regression where the kubelet would fail to update the ready status of pods. ([#84951](https://github.com/kubernetes/kubernetes/pull/84951), [@tedyu](https://github.com/tedyu)) [SIG Node]
+- Fixed an issue that could cause the kubelet to incorrectly run concurrent pod reconciliation loops and crash. ([#89055](https://github.com/kubernetes/kubernetes/pull/89055), [@tedyu](https://github.com/tedyu)) [SIG Node]
+- Fixed block CSI volume cleanup after timeouts. ([#88660](https://github.com/kubernetes/kubernetes/pull/88660), [@jsafrane](https://github.com/jsafrane)) [SIG Storage]
+- Fixed cleaning of CSI raw block volumes. ([#87978](https://github.com/kubernetes/kubernetes/pull/87978), [@jsafrane](https://github.com/jsafrane)) [SIG Storage]
+- Fixed AWS Cloud Provider attempting to delete a LoadBalancer security group it didn't provision, and fixed AWS Cloud Provider creating a default LoadBalancer security group even if the annotation `service.beta.kubernetes.io/aws-load-balancer-security-groups` is present, because the intended behavior of aws-load-balancer-security-groups is to replace all security groups assigned to the load balancer. ([#84265](https://github.com/kubernetes/kubernetes/pull/84265), [@bhagwat070919](https://github.com/bhagwat070919)) [SIG Cloud Provider]
+- Fixed two scheduler metrics (pending_pods and schedule_attempts_total) not being recorded ([#87692](https://github.com/kubernetes/kubernetes/pull/87692), [@everpeace](https://github.com/everpeace)) [SIG Scheduling]
+- Fixes an issue with kubelet-reported pod status on deleted/recreated pods. ([#86320](https://github.com/kubernetes/kubernetes/pull/86320), [@liggitt](https://github.com/liggitt)) [SIG Node]
+- Fixes conversion error in multi-version custom resources that could cause metadata.generation to increment on no-op patches or updates of a custom resource. ([#88995](https://github.com/kubernetes/kubernetes/pull/88995), [@liggitt](https://github.com/liggitt)) [SIG API Machinery]
+- Fixes issue where the AAD token obtained by kubectl is incompatible with on-behalf-of flow and oidc. The audience claim before this fix has the "spn:" prefix. After this fix, the "spn:" prefix is omitted. ([#86412](https://github.com/kubernetes/kubernetes/pull/86412), [@weinong](https://github.com/weinong)) [SIG API Machinery, Auth and Cloud Provider]
+- Fixes an issue where you can't attach more than 15 GCE Persistent Disks to c2, n2, m1, m2 machine types. ([#88602](https://github.com/kubernetes/kubernetes/pull/88602), [@yuga711](https://github.com/yuga711)) [SIG Storage]
+- Fixes kube-proxy when the EndpointSlice feature gate is enabled on Windows. ([#86016](https://github.com/kubernetes/kubernetes/pull/86016), [@robscott](https://github.com/robscott)) [SIG Auth and Network]
+- Fixes kubelet crash in client certificate rotation cases ([#88079](https://github.com/kubernetes/kubernetes/pull/88079), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, Auth and Node]
+- Fixes service account token admission error in clusters that do not run the service account token controller ([#87029](https://github.com/kubernetes/kubernetes/pull/87029), [@liggitt](https://github.com/liggitt)) [SIG Auth]
+- Fixes v1.17.0 regression in --service-cluster-ip-range handling with IPv4 ranges larger than 65536 IP addresses ([#86534](https://github.com/kubernetes/kubernetes/pull/86534), [@liggitt](https://github.com/liggitt)) [SIG Network]
+- Fixes wrong validation result of NetworkPolicy PolicyTypes ([#85747](https://github.com/kubernetes/kubernetes/pull/85747), [@tnqn](https://github.com/tnqn)) [SIG Network]
+- For subprotocol negotiation, both client and server protocols are now required. ([#86646](https://github.com/kubernetes/kubernetes/pull/86646), [@tedyu](https://github.com/tedyu)) [SIG API Machinery and Node]
+- For volumes that allow attaches across multiple nodes, attach and detach operations across different nodes are now executed in parallel. ([#88678](https://github.com/kubernetes/kubernetes/pull/88678), [@verult](https://github.com/verult)) [SIG Storage]
+- Garbage collector now can correctly orphan ControllerRevisions when StatefulSets are deleted with orphan propagation policy. ([#84984](https://github.com/kubernetes/kubernetes/pull/84984), [@cofyc](https://github.com/cofyc)) [SIG Apps]
+- `Get-kube.sh` uses gcloud's current local GCP service account for auth when the provider is GCE or GKE, instead of the metadata server default ([#88383](https://github.com/kubernetes/kubernetes/pull/88383), [@BenTheElder](https://github.com/BenTheElder)) [SIG Cluster Lifecycle]
+- Golang/x/net has been updated to bring in fixes for CVE-2020-9283 ([#88381](https://github.com/kubernetes/kubernetes/pull/88381), [@BenTheElder](https://github.com/BenTheElder)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle and Instrumentation]
+- If a serving certificate's param specifies a name that is an IP for an SNI certificate, it will have priority for replying to server connections. ([#85308](https://github.com/kubernetes/kubernetes/pull/85308), [@deads2k](https://github.com/deads2k)) [SIG API Machinery]
+- Improved yaml parsing performance ([#85458](https://github.com/kubernetes/kubernetes/pull/85458), [@cjcullen](https://github.com/cjcullen)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Node]
+- Improves performance of the node authorizer ([#87696](https://github.com/kubernetes/kubernetes/pull/87696), [@liggitt](https://github.com/liggitt)) [SIG Auth]
+- In GKE alpha clusters it will be possible to use the service annotation `cloud.google.com/network-tier: Standard` ([#88487](https://github.com/kubernetes/kubernetes/pull/88487), [@zioproto](https://github.com/zioproto)) [SIG Cloud Provider]
+- Includes FSType when describing CSI persistent volumes. ([#85293](https://github.com/kubernetes/kubernetes/pull/85293), [@huffmanca](https://github.com/huffmanca)) [SIG CLI and Storage]
+- Iptables/userspace proxy: improve performance by getting local addresses only once per sync loop, instead of for every external IP ([#85617](https://github.com/kubernetes/kubernetes/pull/85617), [@andrewsykim](https://github.com/andrewsykim)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Network]
+- Kube-aggregator: always sets unavailableGauge metric to reflect the current state of a service. ([#87778](https://github.com/kubernetes/kubernetes/pull/87778), [@p0lyn0mial](https://github.com/p0lyn0mial)) [SIG API Machinery]
+- Kube-apiserver: fixed a conflict error encountered when attempting to delete a pod with gracePeriodSeconds=0 and a resourceVersion precondition ([#85516](https://github.com/kubernetes/kubernetes/pull/85516), [@michaelgugino](https://github.com/michaelgugino)) [SIG API Machinery]
+- Kube-proxy no longer modifies shared EndpointSlices. ([#86092](https://github.com/kubernetes/kubernetes/pull/86092), [@robscott](https://github.com/robscott)) [SIG Network]
+- Kube-proxy: on dual-stack mode, if it is not able to get the IP Family of an endpoint, it logs it at level InfoV(4) instead of Warning, avoiding flooding the logs for endpoints without addresses ([#88934](https://github.com/kubernetes/kubernetes/pull/88934), [@aojea](https://github.com/aojea)) [SIG Network]
+- Kubeadm allows configuring single-stack clusters if dual-stack is enabled ([#87453](https://github.com/kubernetes/kubernetes/pull/87453), [@aojea](https://github.com/aojea)) [SIG API Machinery, Cluster Lifecycle and Network]
+- Kubeadm now includes CoreDNS version 1.6.7 ([#86260](https://github.com/kubernetes/kubernetes/pull/86260), [@rajansandeep](https://github.com/rajansandeep)) [SIG Cluster Lifecycle]
+- Kubeadm upgrades always persist the etcd backup for stacked etcd ([#86861](https://github.com/kubernetes/kubernetes/pull/86861), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Kubeadm: 'kubeadm alpha kubelet config download' has been removed; please use 'kubeadm upgrade node phase kubelet-config' instead ([#87944](https://github.com/kubernetes/kubernetes/pull/87944), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Kubeadm: Forward cluster name to the controller-manager arguments ([#85817](https://github.com/kubernetes/kubernetes/pull/85817), [@ereslibre](https://github.com/ereslibre)) [SIG Cluster Lifecycle]
+- Kubeadm: add support for the "ci/k8s-master" version label as a replacement for "ci-cross/*", which no longer exists. ([#86609](https://github.com/kubernetes/kubernetes/pull/86609), [@Pensu](https://github.com/Pensu)) [SIG Cluster Lifecycle]
+- Kubeadm: apply further improvements to the tentative support for concurrent etcd member join. Fixes a bug where multiple members could receive the same hostname. Increases the etcd client dial timeout and retry timeout for add/remove/... operations. ([#87505](https://github.com/kubernetes/kubernetes/pull/87505), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: don't write the kubelet environment file on "upgrade apply" ([#85412](https://github.com/kubernetes/kubernetes/pull/85412), [@boluisa](https://github.com/boluisa)) [SIG Cluster Lifecycle]
+- Kubeadm: fix potential panic when executing "kubeadm reset" with a corrupted kubelet.conf file ([#86216](https://github.com/kubernetes/kubernetes/pull/86216), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: fix the bug that 'kubeadm upgrade' hangs in single node cluster ([#88434](https://github.com/kubernetes/kubernetes/pull/88434), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Kubeadm: make sure images are pre-pulled even if a tag did not change but their contents changed ([#85603](https://github.com/kubernetes/kubernetes/pull/85603), [@bart0sh](https://github.com/bart0sh)) [SIG Cluster Lifecycle]
+- Kubeadm: remove 'kubeadm upgrade node config' command since it was deprecated in v1.15; please use 'kubeadm upgrade node phase kubelet-config' instead ([#87975](https://github.com/kubernetes/kubernetes/pull/87975), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Kubeadm: remove the deprecated CoreDNS feature-gate. It was set to "true" since v1.11 when the feature went GA. In v1.13 it was marked as deprecated and hidden from the CLI. ([#87400](https://github.com/kubernetes/kubernetes/pull/87400), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: retry `kubeadm-config` ConfigMap creation or mutation if the apiserver is not responding. This will improve resiliency when joining new control plane nodes. ([#85763](https://github.com/kubernetes/kubernetes/pull/85763), [@ereslibre](https://github.com/ereslibre)) [SIG Cluster Lifecycle]
+- Kubeadm: tolerate whitespace when validating certificate authority PEM data in kubeconfig files ([#86705](https://github.com/kubernetes/kubernetes/pull/86705), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: use the bind-address option to configure the kube-controller-manager and kube-scheduler HTTP probes ([#86493](https://github.com/kubernetes/kubernetes/pull/86493), [@aojea](https://github.com/aojea)) [SIG Cluster Lifecycle]
+- Kubeadm: uses the api-server AdvertiseAddress IP family to choose the etcd endpoint IP family for non-external etcd clusters ([#85745](https://github.com/kubernetes/kubernetes/pull/85745), [@aojea](https://github.com/aojea)) [SIG Cluster Lifecycle]
+- Kubectl cluster-info dump --output-directory=xxx now generates files with an extension depending on the output format. ([#82070](https://github.com/kubernetes/kubernetes/pull/82070), [@olivierlemasle](https://github.com/olivierlemasle)) [SIG CLI]
+- `Kubectl describe` and `kubectl top pod` will return a message saying `"No resources found"` or `"No resources found in namespace"` if there are no results to display. ([#87527](https://github.com/kubernetes/kubernetes/pull/87527), [@brianpursley](https://github.com/brianpursley)) [SIG CLI]
+- `Kubectl drain node --dry-run` will list pods that would be evicted or deleted ([#82660](https://github.com/kubernetes/kubernetes/pull/82660), [@sallyom](https://github.com/sallyom)) [SIG CLI]
+- `Kubectl set resources` will no longer return an error if passed an empty change for a resource. `kubectl set subject` will no longer return an error if passed an empty change for a resource. ([#85490](https://github.com/kubernetes/kubernetes/pull/85490), [@sallyom](https://github.com/sallyom)) [SIG CLI]
+- Kubelet metrics gathered through metrics-server or prometheus should no longer time out for Windows nodes running more than 3 pods. ([#87730](https://github.com/kubernetes/kubernetes/pull/87730), [@marosset](https://github.com/marosset)) [SIG Node, Testing and Windows]
+- Kubelet metrics have been changed to buckets. For example, the `exec/{podNamespace}/{podID}/{containerName}` metric is now just exec. ([#87913](https://github.com/kubernetes/kubernetes/pull/87913), [@cheftako](https://github.com/cheftako)) [SIG Node]
+- Kubelets perform fewer unnecessary pod status update operations on the API server. ([#88591](https://github.com/kubernetes/kubernetes/pull/88591), [@smarterclayton](https://github.com/smarterclayton)) [SIG Node and Scalability]
+- Kubernetes will try to acquire the iptables lock every 100 msec during 5 seconds instead of every second. This is especially useful for environments using kube-proxy in iptables mode with a high churn rate of services. ([#85771](https://github.com/kubernetes/kubernetes/pull/85771), [@aojea](https://github.com/aojea)) [SIG Network]
+- Limit number of instances in a single update to GCE target pool to 1000. ([#87881](https://github.com/kubernetes/kubernetes/pull/87881), [@wojtek-t](https://github.com/wojtek-t)) [SIG Cloud Provider, Network and Scalability]
+- Make Azure clients only retry on specified HTTP status codes ([#88017](https://github.com/kubernetes/kubernetes/pull/88017), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Make error message and service event message more clear ([#86078](https://github.com/kubernetes/kubernetes/pull/86078), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Minimize AWS NLB health check timeout when externalTrafficPolicy is set to Local ([#73363](https://github.com/kubernetes/kubernetes/pull/73363), [@kellycampbell](https://github.com/kellycampbell)) [SIG Cloud Provider]
+- Pause image contains "Architecture" in non-amd64 images ([#87954](https://github.com/kubernetes/kubernetes/pull/87954), [@BenTheElder](https://github.com/BenTheElder)) [SIG Release]
+- Pause image upgraded to 3.2 in kubelet and kubeadm. ([#88173](https://github.com/kubernetes/kubernetes/pull/88173), [@BenTheElder](https://github.com/BenTheElder)) [SIG CLI, Cluster Lifecycle, Node and Testing]
+- Plugin/PluginConfig and Policy APIs are mutually exclusive when running the scheduler ([#88864](https://github.com/kubernetes/kubernetes/pull/88864), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Remove `FilteredNodesStatuses` argument from `PreScore`'s interface. ([#88189](https://github.com/kubernetes/kubernetes/pull/88189), [@skilxn-go](https://github.com/skilxn-go)) [SIG Scheduling and Testing]
+- Resolved a performance issue in the node authorizer index maintenance. ([#87693](https://github.com/kubernetes/kubernetes/pull/87693), [@liggitt](https://github.com/liggitt)) [SIG Auth]
+- Resolved regression in admission, authentication, and authorization webhook performance in v1.17.0-rc.1 ([#85810](https://github.com/kubernetes/kubernetes/pull/85810), [@liggitt](https://github.com/liggitt)) [SIG API Machinery and Testing]
+- Resolves performance regression in `kubectl get all` and in client-go discovery clients constructed using `NewDiscoveryClientForConfig` or `NewDiscoveryClientForConfigOrDie`. ([#86168](https://github.com/kubernetes/kubernetes/pull/86168), [@liggitt](https://github.com/liggitt)) [SIG API Machinery]
+- Reverted a kubectl azure auth module change where the oidc claim spn: prefix was omitted, resulting in breaking behavior with existing Azure AD OIDC-enabled api-servers ([#87507](https://github.com/kubernetes/kubernetes/pull/87507), [@weinong](https://github.com/weinong)) [SIG API Machinery, Auth and Cloud Provider]
+- Shared informers are now more reliable in the face of network disruption. ([#86015](https://github.com/kubernetes/kubernetes/pull/86015), [@squeed](https://github.com/squeed)) [SIG API Machinery]
+- Specifying PluginConfig for the same plugin more than once fails scheduler startup.
+  Specifying extenders and configuring .ignoredResources for the NodeResourcesFit plugin fails ([#88870](https://github.com/kubernetes/kubernetes/pull/88870), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Terminating a restartPolicy=Never pod no longer has a chance to report the pod succeeded when it actually failed. ([#88440](https://github.com/kubernetes/kubernetes/pull/88440), [@smarterclayton](https://github.com/smarterclayton)) [SIG Node and Testing]
+- The CSR signing cert/key pairs will be reloaded from disk like the kube-apiserver cert/key pairs ([#86816](https://github.com/kubernetes/kubernetes/pull/86816), [@deads2k](https://github.com/deads2k)) [SIG API Machinery, Apps and Auth]
+- The EventRecorder from k8s.io/client-go/tools/events will now create events in the default namespace (instead of kube-system) when the related object does not have it set. ([#88815](https://github.com/kubernetes/kubernetes/pull/88815), [@enj](https://github.com/enj)) [SIG API Machinery]
+- The audit event sourceIPs list will now always end with the IP that sent the request directly to the API server. ([#87167](https://github.com/kubernetes/kubernetes/pull/87167), [@tallclair](https://github.com/tallclair)) [SIG API Machinery and Auth]
+- The sample-apiserver aggregated conformance test has been updated to use the Kubernetes v1.17.0 sample apiserver ([#84735](https://github.com/kubernetes/kubernetes/pull/84735), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, Architecture, CLI and Testing]
+- To reduce chances of throttling, VM cache is set to nil when Azure node provisioning state is deleting ([#87635](https://github.com/kubernetes/kubernetes/pull/87635), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- VMSS cache is added to reduce the chance of VMSS GET throttling ([#85885](https://github.com/kubernetes/kubernetes/pull/85885), [@nilo19](https://github.com/nilo19)) [SIG Cloud Provider]
+- Wait for kubelet & kube-proxy to be ready on Windows node within 10s ([#85228](https://github.com/kubernetes/kubernetes/pull/85228), [@YangLu1031](https://github.com/YangLu1031)) [SIG Cluster Lifecycle]
+- `kubectl apply -f <file> --prune -n <namespace>` should prune all resources not defined in the file in the CLI-specified namespace. ([#85613](https://github.com/kubernetes/kubernetes/pull/85613), [@MartinKaburu](https://github.com/MartinKaburu)) [SIG CLI]
+- `kubectl create clusterrolebinding` creates a rbac.authorization.k8s.io/v1 object ([#85889](https://github.com/kubernetes/kubernetes/pull/85889), [@oke-py](https://github.com/oke-py)) [SIG CLI]
+- `kubectl diff` now returns 1 only on diff finding changes, and >1 on kubectl errors. The "exit status code 1" message has also been muted. (See the sketch after this list.) ([#87437](https://github.com/kubernetes/kubernetes/pull/87437), [@apelisse](https://github.com/apelisse)) [SIG CLI and Testing]
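+
+A few of the CLI changes above alter day-to-day invocations; the sketch below is illustrative only, with `manifests/`, `app=demo`, `demo-ns` and `my-node` as placeholder names:
+
+```bash
+# `kubectl diff` now exits 1 only when it finds differences, >1 on errors.
+kubectl diff -f manifests/
+[ $? -eq 1 ] && echo "live objects differ from manifests"
+
+# Prune resources in the given namespace that are no longer in the files
+# (--prune requires a label selector or --all).
+kubectl apply -f manifests/ --prune -l app=demo -n demo-ns
+
+# Preview which pods a drain would evict or delete, without doing it.
+kubectl drain my-node --dry-run
+```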
+
+## Dependencies
+
+- Update Calico to v3.8.4 ([#84163](https://github.com/kubernetes/kubernetes/pull/84163), [@david-tigera](https://github.com/david-tigera))[SIG Cluster Lifecycle]
+- Update aws-sdk-go dependency to v1.28.2 ([#87253](https://github.com/kubernetes/kubernetes/pull/87253), [@SaranBalaji90](https://github.com/SaranBalaji90))[SIG API Machinery and Cloud Provider]
+- Update CNI version to v0.8.5 ([#78819](https://github.com/kubernetes/kubernetes/pull/78819), [@justaugustus](https://github.com/justaugustus))[SIG Release, Testing, Network, Cluster Lifecycle and API Machinery]
+- Update cri-tools to v1.17.0 ([#86305](https://github.com/kubernetes/kubernetes/pull/86305), [@saschagrunert](https://github.com/saschagrunert))[SIG Release and Cluster Lifecycle]
+- Pause image upgraded to 3.2 in kubelet and kubeadm ([#88173](https://github.com/kubernetes/kubernetes/pull/88173), [@BenTheElder](https://github.com/BenTheElder))[SIG CLI, Node, Testing and Cluster Lifecycle]
+- Update CoreDNS version to 1.6.7 in kubeadm ([#86260](https://github.com/kubernetes/kubernetes/pull/86260), [@rajansandeep](https://github.com/rajansandeep))[SIG Cluster Lifecycle]
+- Update golang.org/x/crypto to fix CVE-2020-9283 ([#88381](https://github.com/kubernetes/kubernetes/pull/88381), [@BenTheElder](https://github.com/BenTheElder))[SIG CLI, Instrumentation, API Machinery, Cluster Lifecycle and Cloud Provider]
+- Update Go to 1.13.8 ([#87648](https://github.com/kubernetes/kubernetes/pull/87648), [@ialidzhikov](https://github.com/ialidzhikov))[SIG Release and Testing]
+- Update Cluster-Autoscaler to 1.18.0 ([#89095](https://github.com/kubernetes/kubernetes/pull/89095), [@losipiuk](https://github.com/losipiuk))[SIG Autoscaling and Cluster Lifecycle]
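+
+One hedged way to confirm the bumped component images on a kubeadm-managed cluster is to list what kubeadm would pull; for v1.18 the output should include coredns:1.6.7 and pause:3.2 as noted above:
+
+```bash
+# Lists the control plane images (including CoreDNS and pause) that
+# kubeadm uses for the given Kubernetes version.
+kubeadm config images list --kubernetes-version v1.18.0
+```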
+
+
+
+# v1.18.0-rc.1
+
+[Documentation](https://docs.k8s.io)
+
+## Downloads for v1.18.0-rc.1
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes.tar.gz) | `c17231d5de2e0677e8af8259baa11a388625821c79b86362049f2edb366404d6f4b4587b8f13ccbceeb2f32c6a9fe98607f779c0f3e1caec438f002e3a2c8c21`
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-src.tar.gz) | `e84ffad57c301f5d6e90f916b996d5abb0c987928c3ca6b1565f7b042588f839b994ca12c43fc36f0ffb63f9fabc15110eb08be253b8939f49cd951e956da618`
+
+### Client Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-client-darwin-386.tar.gz) | `1aea99923d492436b3eb91aaecffac94e5d0aa2b38a0930d266fda85c665bbc4569745c409aa302247df3b578ce60324e7a489eb26240e97d4e65a67428ea3d1`
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-client-darwin-amd64.tar.gz) | `07fa7340a959740bd52b83ff44438bbd988e235277dad1e43f125f08ac85230a24a3b755f4e4c8645743444fa2b66a3602fc445d7da6d2fc3770e8c21ba24b33`
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-client-linux-386.tar.gz) | `48cebd26448fdd47aa36257baa4c716a98fda055bbf6a05230f2a3fe3c1b99b4e483668661415392190f3eebb9cb6e15c784626b48bb2541d93a37902f0e3974`
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-client-linux-amd64.tar.gz) | `c3a5fedf263f07a07f59c01fea6c63c1e0b76ee8dc67c45b6c134255c28ed69171ccc2f91b6a45d6a8ec5570a0a7562e24c33b9d7b0d1a864f4dc04b178b3c04`
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-client-linux-arm.tar.gz) | `a6b11a55bd38583bbaac14931a6862f8ce6493afe30947ba29e5556654a571593358278df59412bbeb6888fa127e9ae4c0047a9d46cb59394995010796df6b14`
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-client-linux-arm64.tar.gz) | `9e15331ac8010154a9b64f5488969fc8ee2f21059639896cb84c5cf4f05f4c9d1d8970cb6f9831de6b34013848227c1972c12a698d07aac1ecc056e972fe6f79`
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-client-linux-ppc64le.tar.gz) | `f828fe6252678de9d4822e482f5873309ae9139b2db87298ab3273ce45d38aa07b6b9b42b76c140705f27ba71e101d58b43e59ac7259d7c08dc647ea809e207c`
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-client-linux-s390x.tar.gz) | `19da4b45f0666c063934af616f3e7ed3caa99d4ee1e46d53efadc7a8a4d38e43a36ced7249acd7ad3dcc4b4f60d8451b4f7ec7727e478ee2fadd14d353228bce`
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-client-windows-386.tar.gz) | `775c9afb6cb3e7c4ba53e9f48a5df2cf207234a33059bd74448bc9f177dd120fb3f9c58ab45048a566326acc43bc8a67e886e10ef99f20780c8f63bb17426ebd`
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-client-windows-amd64.tar.gz) | `208d2595a5b57ac97aac75b4a2a6130f0c937f781a030bde1a432daf4bc51f2fa523fca2eb84c38798489c4b536ee90aad22f7be8477985d9691d51ad8e1c4dc`
+
+### Server Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-server-linux-amd64.tar.gz) | `dcf832eae04f9f52ff473754ef5cfe697b35f4dc1a282622c94fa10943c8c35f4a8777a0c58c7de871c3c428c8973bf72d6bcd8751416d4c682125268b8fcefe`
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-server-linux-arm.tar.gz) | `a04e34bea28eb1c8b492e8b1dd3c0dd87ebee71a7dbbef72be10a335e553361af7e48296e504f9844496b04e66350871114d20cfac3f3b49550d8be60f324ba3`
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-server-linux-arm64.tar.gz) | `a6af086b07a8c2e498f32b43e6511bf6a5e6baf358c572c6910c8df17cd6cae94f562f459714fcead1595767cb14c7f639c5735f1411173bbd38d5604c082a77`
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-server-linux-ppc64le.tar.gz) | `5a960ef5ba0c255f587f2ac0b028cd03136dc91e4efc5d1becab46417852e5524d18572b6f66259531ec6fea997da3c4d162ac153a9439672154375053fec6c7`
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-server-linux-s390x.tar.gz) | `0f32c7d9b14bc238b9a5764d8f00edc4d3bf36bcf06b340b81061424e6070768962425194a8c2025c3a7ffb97b1de551d3ad23d1591ae34dd4e3ba25ab364c33`
+
+### Node Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-node-linux-amd64.tar.gz) | `27d8955d535d14f3f4dca501fd27e4f06fad84c6da878ea5332a5c83b6955667f6f731bfacaf5a3a23c09f14caa400f9bee927a0f269f5374de7f79cd1919b3b`
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-node-linux-arm.tar.gz) | `0d56eccad63ba608335988e90b377fe8ae978b177dc836cdb803a5c99d99e8f3399a666d9477ca9cfe5964944993e85c416aec10a99323e3246141efc0b1cc9e`
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-node-linux-arm64.tar.gz) | `79bb9be66f9e892d866b28e5cc838245818edb9706981fab6ccbff493181b341c1fcf6fe5d2342120a112eb93af413f5ba191cfba1ab4c4a8b0546a5ad8ec220`
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-node-linux-ppc64le.tar.gz) | `3e9e2c6f9a2747d828069511dce8b4034c773c2d122f005f4508e22518055c1e055268d9d86773bbd26fbd2d887d783f408142c6c2f56ab2f2365236fd4d2635`
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-node-linux-s390x.tar.gz) | `4f96e018c336fa13bb6df6f7217fe46a2b5c47f806f786499c429604ccba2ebe558503ab2c72f63250aa25b61dae2d166e4b80ae10f6ab37d714f87c1dcf6691`
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0-rc.1/kubernetes-node-windows-amd64.tar.gz) | `ab110d76d506746af345e5897ef4f6993d5f53ac818ba69a334f3641047351aa63bfb3582841a9afca51dd0baff8b9010077d9c8ec85d2d69e4172b8d4b338b0`
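+
+The sha512 hashes in the tables above can be verified locally before unpacking a download. For example, for the Linux amd64 client tarball:
+
+```bash
+# Download a release artifact and check it against the published sha512.
+VERSION=v1.18.0-rc.1
+curl -LO "https://dl.k8s.io/${VERSION}/kubernetes-client-linux-amd64.tar.gz"
+echo "c3a5fedf263f07a07f59c01fea6c63c1e0b76ee8dc67c45b6c134255c28ed69171ccc2f91b6a45d6a8ec5570a0a7562e24c33b9d7b0d1a864f4dc04b178b3c04  kubernetes-client-linux-amd64.tar.gz" | sha512sum --check
+```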
+
+## Changelog since v1.18.0-beta.2
+
+## Changes by Kind
+
+### API Change
+
+- Removes ConfigMap as suggestion for IngressClass parameters ([#89093](https://github.com/kubernetes/kubernetes/pull/89093), [@robscott](https://github.com/robscott)) [SIG Network]
+
+### Other (Bug, Cleanup or Flake)
+
+- EndpointSlice should not contain endpoints for terminating pods ([#89056](https://github.com/kubernetes/kubernetes/pull/89056), [@andrewsykim](https://github.com/andrewsykim)) [SIG Apps and Network]
+- Fix a bug where ExternalTrafficPolicy is not applied to service ExternalIPs. ([#88786](https://github.com/kubernetes/kubernetes/pull/88786), [@freehan](https://github.com/freehan)) [SIG Network]
+- Fix invalid VMSS updates due to incorrect cache ([#89002](https://github.com/kubernetes/kubernetes/pull/89002), [@ArchangelSDY](https://github.com/ArchangelSDY)) [SIG Cloud Provider]
+- Fix isCurrentInstance for Windows by removing the dependency on hostname. ([#89138](https://github.com/kubernetes/kubernetes/pull/89138), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Fixed a data race in kubelet image manager that can cause static pod workers to silently stop working. ([#88915](https://github.com/kubernetes/kubernetes/pull/88915), [@roycaihw](https://github.com/roycaihw)) [SIG Node]
+- Fixed an issue that could cause the kubelet to incorrectly run concurrent pod reconciliation loops and crash. ([#89055](https://github.com/kubernetes/kubernetes/pull/89055), [@tedyu](https://github.com/tedyu)) [SIG Node]
+- Kube-proxy: on dual-stack mode, if it is not able to get the IP Family of an endpoint, it logs it at level InfoV(4) instead of Warning, avoiding flooding the logs for endpoints without addresses ([#88934](https://github.com/kubernetes/kubernetes/pull/88934), [@aojea](https://github.com/aojea)) [SIG Network]
+- Update Cluster Autoscaler to 1.18.0; changelog: https://github.com/kubernetes/autoscaler/releases/tag/cluster-autoscaler-1.18.0 ([#89095](https://github.com/kubernetes/kubernetes/pull/89095), [@losipiuk](https://github.com/losipiuk)) [SIG Autoscaling and Cluster Lifecycle]
+
+
+# v1.18.0-beta.2
+
+[Documentation](https://docs.k8s.io)
+
+## Downloads for v1.18.0-beta.2
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes.tar.gz) | `3017430ca17f8a3523669b4a02c39cedfc6c48b07281bc0a67a9fbe9d76547b76f09529172cc01984765353a6134a43733b7315e0dff370bba2635dd2a6289af`
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-src.tar.gz) | `c5fd60601380a99efff4458b1c9cf4dc02195f6f756b36e590e54dff68f7064daf32cf63980dddee13ef9dec7a60ad4eeb47a288083fdbbeeef4bc038384e9ea`
+
+### Client Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-client-darwin-386.tar.gz) | `7e49ede167b9271d4171e477fa21d267b2fb35f80869337d5b323198dc12f71b61441975bf925ad6e6cd7b61cbf6372d386417dc1e5c9b3c87ae651021c37237`
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-client-darwin-amd64.tar.gz) | `3f5cdf0e85eee7d0773e0ae2df1c61329dea90e0da92b02dae1ffd101008dc4bade1c4951fc09f0cad306f0bcb7d16da8654334ddee43d5015913cc4ac8f3eda`
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-client-linux-386.tar.gz) | `b67b41c11bfecb88017c33feee21735c56f24cf6f7851b63c752495fc0fb563cd417a67a81f46bca091f74dc00fca1f296e483d2e3dfe2004ea4b42e252d30b9`
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-client-linux-amd64.tar.gz) | `1fef2197cb80003e3a5c26f05e889af9d85fbbc23e27747944d2997ace4bfa28f3670b13c08f5e26b7e274176b4e2df89c1162aebd8b9506e63b39b311b2d405`
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-client-linux-arm.tar.gz) | `84e5f4d9776490219ee94a84adccd5dfc7c0362eb330709771afcde95ec83f03d96fe7399eec218e47af0a1e6445e24d95e6f9c66c0882ef8233a09ff2022420`
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-client-linux-arm64.tar.gz) | `ba613b114e0cca32fa21a3d10f845aa2f215d3af54e775f917ff93919f7dd7075efe254e4047a85a1f4b817fc2bd78006c2e8873885f1208cbc02db99e2e2e25`
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-client-linux-ppc64le.tar.gz) | `502a6938d8c4bbe04abbd19b59919d86765058ff72334848be4012cec493e0e7027c6cd950cf501367ac2026eea9f518110cb72d1c792322b396fc2f73d23217`
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-client-linux-s390x.tar.gz) | `c24700e0ed2ef5c1d2dd282d638c88d90392ae90ea420837b39fd8e1cfc19525017325ccda71d8472fdaea174762208c09e1bba9bbc77c89deef6fac5e847ba2`
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-client-windows-386.tar.gz) | `0d4c5a741b052f790c8b0923c9586ee9906225e51cf4dc8a56fc303d4d61bb5bf77fba9e65151dec7be854ff31da8fc2dcd3214563e1b4b9951e6af4aa643da4`
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-client-windows-amd64.tar.gz) | `841ef2e306c0c9593f04d9528ee019bf3b667761227d9afc1d6ca8bf1aa5631dc25f5fe13ff329c4bf0c816b971fd0dec808f879721e0f3bf51ce49772b38010`
+
+### Server Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-server-linux-amd64.tar.gz) | `b373df2e6ef55215e712315a5508e85a39126bd81b7b93c6b6305238919a88c740077828a6f19bcd97141951048ef7a19806ef6b1c3e1772dbc45715c5fcb3af`
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-server-linux-arm.tar.gz) | `b8103cb743c23076ce8dd7c2da01c8dd5a542fbac8480e82dc673139c8ee5ec4495ca33695e7a18dd36412cf1e18ed84c8de05042525ddd8e869fbdfa2766569`
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-server-linux-arm64.tar.gz) | `8f8f05cf64fb9c8d80cdcb4935b2d3e3edc48bdd303231ae12f93e3f4d979237490744a11e24ba7f52dbb017ca321a8e31624dcffa391b8afda3d02078767fa0`
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-server-linux-ppc64le.tar.gz) | `b313b911c46f2ec129537407af3f165f238e48caeb4b9e530783ffa3659304a544ed02bef8ece715c279373b9fb2c781bd4475560e02c4b98a6d79837bc81938`
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-server-linux-s390x.tar.gz) | `a1b6b06571141f507b12e5ef98efb88f4b6b9aba924722b2a74f11278d29a2972ab8290608360151d124608e6e24da0eb3516d484cb5fa12ff2987562f15964a`
+
+### Node Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-node-linux-amd64.tar.gz) | `20e02ca327543cddb2568ead3d5de164cbfb2914ab6416106d906bf12fcfbc4e55b13bea4d6a515e8feab038e2c929d72c4d6909dfd7881ba69fd1e8c772ab99`
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-node-linux-arm.tar.gz) | `ecd817ef05d6284f9c6592b84b0a48ea31cf4487030c9fb36518474b2a33dad11b9c852774682e60e4e8b074e6bea7016584ca281dddbe2994da5eaf909025c0`
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-node-linux-arm64.tar.gz) | `0020d32b7908ffd5055c8b26a8b3033e4702f89efcfffe3f6fcdb8a9921fa8eaaed4193c85597c24afd8c523662454f233521bb7055841a54c182521217ccc9d`
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-node-linux-ppc64le.tar.gz) | `e065411d66d486e7793449c1b2f5a412510b913bf7f4e728c0a20e275642b7668957050dc266952cdff09acc391369ae6ac5230184db89af6823ba400745f2fc`
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-node-linux-s390x.tar.gz) | `082ee90413beaaea41d6cbe9a18f7d783a95852607f3b94190e0ca12aacdd97d87e233b87117871bfb7d0a4b6302fbc7688549492a9bc50a2f43a5452504d3ce`
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0-beta.2/kubernetes-node-windows-amd64.tar.gz) | `fb5aca0cc36be703f9d4033eababd581bac5de8399c50594db087a99ed4cb56e4920e960eb81d0132d696d094729254eeda2a5c0cb6e65e3abca6c8d61da579e`
+
+## Changelog since v1.18.0-beta.1
+
+## Urgent Upgrade Notes
+
+### (No, really, you MUST read this before you upgrade)
+
+- `kubectl` no longer defaults to `http://localhost:8080`. If you own one of these legacy clusters, you are *strongly* encouraged to secure your server. If you cannot secure your server, you can set `KUBERNETES_MASTER` if you were relying on that behavior and you're a client-go user. Set `--server`, `--kubeconfig` or `KUBECONFIG` to make it work in `kubectl`. ([#86173](https://github.com/kubernetes/kubernetes/pull/86173), [@soltysh](https://github.com/soltysh)) [SIG API Machinery, CLI and Testing]
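+
+For anyone relying on the old default, the replacement invocations look like this (the server address and kubeconfig path are placeholders):
+
+```bash
+# kubectl no longer assumes http://localhost:8080; point it at the
+# cluster explicitly instead.
+kubectl --server=https://my-apiserver.example.com:6443 get nodes
+
+# or via a kubeconfig file:
+kubectl --kubeconfig="$HOME/.kube/config" get nodes
+export KUBECONFIG="$HOME/.kube/config"
+
+# client-go based tools that depended on the old default can set:
+export KUBERNETES_MASTER=https://my-apiserver.example.com:6443
+```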
+- Fixes a regression with clients prior to 1.15 not being able to update podIP in pod status, or podCIDR in node spec, against >= 1.16 API servers ([#88505](https://github.com/kubernetes/kubernetes/pull/88505), [@liggitt](https://github.com/liggitt)) [SIG Apps and Network]
+- Ingress: Add Exact and Prefix matching to Ingress PathTypes ([#88587](https://github.com/kubernetes/kubernetes/pull/88587), [@cmluciano](https://github.com/cmluciano)) [SIG Apps, Cluster Lifecycle and Network]
+- Ingress: Add alternate backends via TypedLocalObjectReference ([#88775](https://github.com/kubernetes/kubernetes/pull/88775), [@cmluciano](https://github.com/cmluciano)) [SIG Apps and Network]
+- Ingress: allow wildcard hosts in IngressRule ([#88858](https://github.com/kubernetes/kubernetes/pull/88858), [@cmluciano](https://github.com/cmluciano)) [SIG Network]
+- Kube-controller-manager and kube-scheduler expose profiling by default to match the kube-apiserver. Use `--enable-profiling=false` to disable. ([#88663](https://github.com/kubernetes/kubernetes/pull/88663), [@deads2k](https://github.com/deads2k)) [SIG API Machinery, Cloud Provider and Scheduling]
+- Move TaintBasedEvictions feature gates to GA ([#87487](https://github.com/kubernetes/kubernetes/pull/87487), [@skilxn-go](https://github.com/skilxn-go)) [SIG API Machinery, Apps, Node, Scheduling and Testing]
+- New flag --endpointslice-updates-batch-period in kube-controller-manager can be used to reduce the number of EndpointSlice updates generated by pod changes. ([#88745](https://github.com/kubernetes/kubernetes/pull/88745), [@mborsz](https://github.com/mborsz)) [SIG API Machinery, Apps and Network]
+- Scheduler Extenders can now be configured in the v1alpha2 component config ([#88768](https://github.com/kubernetes/kubernetes/pull/88768), [@damemi](https://github.com/damemi)) [SIG Release, Scheduling and Testing]
+- The apiserver/v1alpha1#EgressSelectorConfiguration API is now beta. ([#88502](https://github.com/kubernetes/kubernetes/pull/88502), [@caesarxuchao](https://github.com/caesarxuchao)) [SIG API Machinery]
+- The storage.k8s.io/CSIDriver has moved to GA, and is now available for use. ([#84814](https://github.com/kubernetes/kubernetes/pull/84814), [@huffmanca](https://github.com/huffmanca)) [SIG API Machinery, Apps, Auth, Node, Scheduling, Storage and Testing]
+- VolumePVCDataSource moves to GA in 1.18 release ([#88686](https://github.com/kubernetes/kubernetes/pull/88686), [@j-griffith](https://github.com/j-griffith)) [SIG Apps, CLI and Cluster Lifecycle]
+
+### Feature
+
+- Add `rest_client_rate_limiter_duration_seconds` metric to component-base to track client side rate limiter latency in seconds. Broken down by verb and URL. ([#88134](https://github.com/kubernetes/kubernetes/pull/88134), [@jennybuckley](https://github.com/jennybuckley)) [SIG API Machinery, Cluster Lifecycle and Instrumentation]
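+
+  One way to eyeball the new histogram (a sketch; assumes a control-plane host where the kube-controller-manager still serves its insecure metrics port 10252, which varies by deployment):
+
+  ```bash
+  # Scrape the component's metrics endpoint and look for the new histogram.
+  curl -s http://127.0.0.1:10252/metrics | grep rest_client_rate_limiter_duration_seconds
+  ```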
+- Allow user to specify resource using --filename flag when invoking kubectl exec ([#88460](https://github.com/kubernetes/kubernetes/pull/88460), [@soltysh](https://github.com/soltysh)) [SIG CLI and Testing]
+- Apiserver: add a new flag `--goaway-chance`, the fraction of requests that will be closed gracefully (GOAWAY) to prevent HTTP/2 clients from getting stuck on a single apiserver.
+  After a connection is closed (GOAWAY received), the client's other in-flight requests won't be affected, and the client will reconnect.
+  The flag's minimum value is 0 (off) and its maximum is .02 (1/50 requests); .001 (1/1000) is a recommended starting point.
+  Clusters with a single apiserver, or which don't use a load balancer, should NOT enable this. ([#88567](https://github.com/kubernetes/kubernetes/pull/88567), [@answer1991](https://github.com/answer1991)) [SIG API Machinery]
+- Azure: add support for single stack IPv6 ([#88448](https://github.com/kubernetes/kubernetes/pull/88448), [@aramase](https://github.com/aramase)) [SIG Cloud Provider]
+- DefaultConstraints can be specified for the PodTopologySpread plugin in the component config ([#88671](https://github.com/kubernetes/kubernetes/pull/88671), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Kubeadm: support Windows specific kubelet flags in kubeadm-flags.env ([#88287](https://github.com/kubernetes/kubernetes/pull/88287), [@gab-satchi](https://github.com/gab-satchi)) [SIG Cluster Lifecycle and Windows]
+- `kubectl cluster-info dump` now only prints a message telling you where the output was written when that output is not standard output. ([#88765](https://github.com/kubernetes/kubernetes/pull/88765), [@brianpursley](https://github.com/brianpursley)) [SIG CLI]
+- Print NotReady when pod is not ready based on its conditions. ([#88240](https://github.com/kubernetes/kubernetes/pull/88240), [@soltysh](https://github.com/soltysh)) [SIG CLI]
+- Scheduler Extender API is now located under k8s.io/kube-scheduler/extender ([#88540](https://github.com/kubernetes/kubernetes/pull/88540), [@damemi](https://github.com/damemi)) [SIG Release, Scheduling and Testing]
+- Signatures on scale client methods have been modified to accept `context.Context` as a first argument. Signatures of Get, Update, and Patch methods have been updated to accept GetOptions, UpdateOptions and PatchOptions respectively. ([#88599](https://github.com/kubernetes/kubernetes/pull/88599), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG API Machinery, Apps, Autoscaling and CLI]
+- Signatures on the dynamic client methods have been modified to accept `context.Context` as a first argument. Signatures of Delete and DeleteCollection methods now accept DeleteOptions by value instead of by reference. ([#88906](https://github.com/kubernetes/kubernetes/pull/88906), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, Apps, CLI, Cluster Lifecycle, Storage and Testing]
+- Signatures on the metadata client methods have been modified to accept `context.Context` as a first argument. Signatures of Delete and DeleteCollection methods now accept DeleteOptions by value instead of by reference. ([#88910](https://github.com/kubernetes/kubernetes/pull/88910), [@liggitt](https://github.com/liggitt)) [SIG API Machinery, Apps and Testing]
+- Webhooks will have alpha support for network proxy ([#85870](https://github.com/kubernetes/kubernetes/pull/85870), [@Jefftree](https://github.com/Jefftree)) [SIG API Machinery, Auth and Testing]
+- When client certificate files are provided, reload files for new connections, and close connections when a certificate changes. ([#79083](https://github.com/kubernetes/kubernetes/pull/79083), [@jackkleeman](https://github.com/jackkleeman)) [SIG API Machinery, Auth, Node and Testing]
+- When deleting objects using kubectl with the --force flag, you are no longer required to also specify --grace-period=0. ([#87776](https://github.com/kubernetes/kubernetes/pull/87776), [@brianpursley](https://github.com/brianpursley)) [SIG CLI]
+- `kubectl` now contains a `kubectl alpha debug` command. This command allows attaching an ephemeral container to a running pod for the purposes of debugging. ([#88004](https://github.com/kubernetes/kubernetes/pull/88004), [@verb](https://github.com/verb)) [SIG CLI]
+
+### Documentation
+
+- Update Japanese translation for kubectl help ([#86837](https://github.com/kubernetes/kubernetes/pull/86837), [@inductor](https://github.com/inductor)) [SIG CLI and Docs]
+- `kubectl plugin` now prints a note on how to install krew ([#88577](https://github.com/kubernetes/kubernetes/pull/88577), [@corneliusweig](https://github.com/corneliusweig)) [SIG CLI]
+
+### Other (Bug, Cleanup or Flake)
+
+- Azure VMSS LoadBalancerBackendAddressPools updating has been improved with sequential-sync + concurrent-async requests. ([#88699](https://github.com/kubernetes/kubernetes/pull/88699), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- AzureFile and CephFS use new Mount library that prevents logging of sensitive mount options. ([#88684](https://github.com/kubernetes/kubernetes/pull/88684), [@saad-ali](https://github.com/saad-ali)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Storage]
+- Build: Enable kube-cross image-building on K8s Infra ([#88562](https://github.com/kubernetes/kubernetes/pull/88562), [@justaugustus](https://github.com/justaugustus)) [SIG Release and Testing]
+- Client-go certificate manager rotation gained the ability to preserve optional intermediate chains accompanying issued certificates ([#88744](https://github.com/kubernetes/kubernetes/pull/88744), [@jackkleeman](https://github.com/jackkleeman)) [SIG API Machinery and Auth]
+- Conformance image now depends on stretch-slim instead of debian-hyperkube-base as that image is being deprecated and removed. ([#88702](https://github.com/kubernetes/kubernetes/pull/88702), [@dims](https://github.com/dims)) [SIG Cluster Lifecycle, Release and Testing]
+- Deprecate --generator flag from kubectl create commands ([#88655](https://github.com/kubernetes/kubernetes/pull/88655), [@soltysh](https://github.com/soltysh)) [SIG CLI]
+- Fix: prevent apiserver from panicking when failing to load audit webhook config file ([#88879](https://github.com/kubernetes/kubernetes/pull/88879), [@JoshVanL](https://github.com/JoshVanL)) [SIG API Machinery and Auth]
+- Fix /readyz to return error immediately after a shutdown is initiated, before the --shutdown-delay-duration has elapsed. ([#88911](https://github.com/kubernetes/kubernetes/pull/88911), [@tkashem](https://github.com/tkashem)) [SIG API Machinery]
+- Fix a bug where kubenet fails to parse the tc output. ([#83572](https://github.com/kubernetes/kubernetes/pull/83572), [@chendotjs](https://github.com/chendotjs)) [SIG Network]
+- Fix `kubectl describe ingress` annotations not being sorted. ([#88394](https://github.com/kubernetes/kubernetes/pull/88394), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Fix handling of the aws-load-balancer-security-groups annotation. Security Groups assigned with this annotation are no longer modified by Kubernetes, which is the behaviour most users expect; unnecessary Security Groups are also no longer created when this annotation is used. ([#83446](https://github.com/kubernetes/kubernetes/pull/83446), [@Elias481](https://github.com/Elias481)) [SIG Cloud Provider]
+- Fix kubectl create deployment image name ([#86636](https://github.com/kubernetes/kubernetes/pull/86636), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Fix missing "apiVersion" for "involvedObject" in Events for Nodes. ([#87537](https://github.com/kubernetes/kubernetes/pull/87537), [@uthark](https://github.com/uthark)) [SIG Apps and Node]
+- Fix to prevent repeated fetching of PVC/PV objects by the kubelet when processing of pod volumes fails. While this prevents hammering the API server in these error scenarios, it means that some errors in processing volume(s) for a pod could now take up to 2-3 minutes before retry. ([#88141](https://github.com/kubernetes/kubernetes/pull/88141), [@tedyu](https://github.com/tedyu)) [SIG Node and Storage]
+- Fix: azure file mount timeout issue ([#88610](https://github.com/kubernetes/kubernetes/pull/88610), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Fix: corrupted mount point in csi driver ([#88569](https://github.com/kubernetes/kubernetes/pull/88569), [@andyzhangx](https://github.com/andyzhangx)) [SIG Storage]
+- Fixed a bug in the TopologyManager. Previously, the TopologyManager would only guarantee alignment if container creation was serialized in some way. Alignment is now guaranteed under all scenarios of container creation. ([#87759](https://github.com/kubernetes/kubernetes/pull/87759), [@klueska](https://github.com/klueska)) [SIG Node]
+- Fixed block CSI volume cleanup after timeouts. ([#88660](https://github.com/kubernetes/kubernetes/pull/88660), [@jsafrane](https://github.com/jsafrane)) [SIG Node and Storage]
+- Fixes an issue where you can't attach more than 15 GCE Persistent Disks to c2, n2, m1, m2 machine types. ([#88602](https://github.com/kubernetes/kubernetes/pull/88602), [@yuga711](https://github.com/yuga711)) [SIG Storage]
+- For volumes that allow attaches across multiple nodes, attach and detach operations across different nodes are now executed in parallel. ([#88678](https://github.com/kubernetes/kubernetes/pull/88678), [@verult](https://github.com/verult)) [SIG Apps, Node and Storage]
+- Hide kubectl.kubernetes.io/last-applied-configuration in describe command ([#88758](https://github.com/kubernetes/kubernetes/pull/88758), [@soltysh](https://github.com/soltysh)) [SIG Auth and CLI]
+- In GKE alpha clusters it will be possible to use the service annotation `cloud.google.com/network-tier: Standard` ([#88487](https://github.com/kubernetes/kubernetes/pull/88487), [@zioproto](https://github.com/zioproto)) [SIG Cloud Provider]
+- Kubelets perform fewer unnecessary pod status update operations on the API server. ([#88591](https://github.com/kubernetes/kubernetes/pull/88591), [@smarterclayton](https://github.com/smarterclayton)) [SIG Node and Scalability]
+- Plugin/PluginConfig and Policy APIs are mutually exclusive when running the scheduler ([#88864](https://github.com/kubernetes/kubernetes/pull/88864), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Specifying PluginConfig for the same plugin more than once fails scheduler startup.
+
+  Specifying extenders and configuring `.ignoredResources` for the NodeResourcesFit plugin also fails scheduler startup ([#88870](https://github.com/kubernetes/kubernetes/pull/88870), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Support TLS Server Name overrides in kubeconfig file and via --tls-server-name in kubectl ([#88769](https://github.com/kubernetes/kubernetes/pull/88769), [@deads2k](https://github.com/deads2k)) [SIG API Machinery, Auth and CLI]
+- Terminating a restartPolicy=Never pod no longer has a chance to report the pod succeeded when it actually failed. ([#88440](https://github.com/kubernetes/kubernetes/pull/88440), [@smarterclayton](https://github.com/smarterclayton)) [SIG Node and Testing]
+- The EventRecorder from k8s.io/client-go/tools/events will now create events in the default namespace (instead of kube-system) when the related object does not have it set. ([#88815](https://github.com/kubernetes/kubernetes/pull/88815), [@enj](https://github.com/enj)) [SIG API Machinery]
+- The audit event sourceIPs list will now always end with the IP that sent the request directly to the API server. ([#87167](https://github.com/kubernetes/kubernetes/pull/87167), [@tallclair](https://github.com/tallclair)) [SIG API Machinery and Auth]
+- Update to use golang 1.13.8 ([#87648](https://github.com/kubernetes/kubernetes/pull/87648), [@ialidzhikov](https://github.com/ialidzhikov)) [SIG Release and Testing]
+- Validate kube-proxy flags --ipvs-tcp-timeout, --ipvs-tcpfin-timeout, --ipvs-udp-timeout ([#88657](https://github.com/kubernetes/kubernetes/pull/88657), [@chendotjs](https://github.com/chendotjs)) [SIG Network]
+
+
+# v1.18.0-beta.1
+
+[Documentation](https://docs.k8s.io)
+
+## Downloads for v1.18.0-beta.1
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes.tar.gz) | `7c182ca905b3a31871c01ab5fdaf46f074547536c7975e069ff230af0d402dfc0346958b1d084bd2c108582ffc407484e6a15a1cd93e9affbe34b6e99409ef1f`
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-src.tar.gz) | `d104b8c792b1517bd730787678c71c8ee3b259de81449192a49a1c6e37a6576d28f69b05c2019cc4a4c40ddeb4d60b80138323df3f85db8682caabf28e67c2de`
+
+### Client Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-client-darwin-386.tar.gz) | `bc337bb8f200a789be4b97ce99b9d7be78d35ebd64746307c28339dc4628f56d9903e0818c0888aaa9364357a528d1ac6fd34f74377000f292ec502fbea3837e`
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-client-darwin-amd64.tar.gz) | `38dfa5e0b0cfff39942c913a6bcb2ad8868ec43457d35cffba08217bb6e7531720e0731f8588505f4c81193ce5ec0e5fe6870031cf1403fbbde193acf7e53540`
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-client-linux-386.tar.gz) | `8e63ec7ce29c69241120c037372c6c779e3f16253eabd612c7cbe6aa89326f5160eb5798004d723c5cd72d458811e98dac3574842eb6a57b2798ecd2bbe5bcf9`
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-client-linux-amd64.tar.gz) | `c1be9f184a7c3f896a785c41cd6ece9d90d8cb9b1f6088bdfb5557d8856c55e455f6688f5f54c2114396d5ae7adc0361e34ebf8e9c498d0187bd785646ccc1d0`
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-client-linux-arm.tar.gz) | `8eab02453cfd9e847632a774a0e0cf3a33c7619fb4ced7f1840e1f71444e8719b1c8e8cbfdd1f20bb909f3abe39cdcac74f14cb9c878c656d35871b7c37c7cbe`
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-client-linux-arm64.tar.gz) | `f7df0ec02d2e7e63278d5386e8153cfe2b691b864f17b6452cc824a5f328d688976c975b076e60f1c6b3c859e93e477134fbccc53bb49d9e846fb038b34eee48`
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-client-linux-ppc64le.tar.gz) | `36dd5b10addca678a518e6d052c9d6edf473e3f87388a2f03f714c93c5fbfe99ace16cf3b382a531be20a8fe6f4160f8d891800dd2cff5f23c9ca12c2f4a151b`
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-client-linux-s390x.tar.gz) | `5bdbb44b996ab4ccf3a383780270f5cfdbf174982c300723c8bddf0a48ae5e459476031c1d51b9d30ffd621d0a126c18a5de132ef1d92fca2f3e477665ea10cc`
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-client-windows-386.tar.gz) | `5dea3d4c4e91ef889850143b361974250e99a3c526f5efee23ff9ccdcd2ceca4a2247e7c4f236bdfa77d2150157da5d676ac9c3ba26cf3a2f1e06d8827556f77`
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-client-windows-amd64.tar.gz) | `db298e698391368703e6aea7f4345aec5a4b8c69f9d8ff6c99fb5804a6cea16d295fb01e70fe943ade3d4ce9200a081ad40da21bd331317ec9213f69b4d6c48f`
+
+### Server Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-server-linux-amd64.tar.gz) | `c6284929dd5940e750b48db72ffbc09f73c5ec31ab3db283babb8e4e07cd8cbb27642f592009caae4717981c0db82c16312849ef4cbafe76acc4264c7d5864ac`
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-server-linux-arm.tar.gz) | `6fc9552cf082c54cc0833b19876117c87ba7feb5a12c7e57f71b52208daf03eaef3ca56bd22b7bce2d6e81b5a23537cf6f5497a6eaa356c0aab1d3de26c309f9`
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-server-linux-arm64.tar.gz) | `b794b9c399e548949b5bfb2fe71123e86c2034847b2c99aca34b6de718a35355bbecdae9dc2a81c49e3c82fb4b5862526a3f63c2862b438895e12c5ea884f22e`
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-server-linux-ppc64le.tar.gz) | `fddaed7a54f97046a91c29534645811c6346e973e22950b2607b8c119c2377e9ec2d32144f81626078cdaeca673129cc4016c1a3dbd3d43674aa777089fb56ac`
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-server-linux-s390x.tar.gz) | `65951a534bb55069c7419f41cbcdfe2fae31541d8a3f9eca11fc2489addf281c5ad2d13719212657da0be5b898f22b57ac39446d99072872fbacb0a7d59a4f74`
+
+### Node Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-node-linux-amd64.tar.gz) | `992059efb5cae7ed0ef55820368d854bad1c6d13a70366162cd3b5111ce24c371c7c87ded2012f055e08b2ff1b4ef506e1f4e065daa3ac474fef50b5efa4fb07`
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-node-linux-arm.tar.gz) | `c63ae0f8add5821ad267774314b8c8c1ffe3b785872bf278e721fd5dfdad1a5db1d4db3720bea0a36bf10d9c6dd93e247560162c0eac6e1b743246f587d3b27a`
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-node-linux-arm64.tar.gz) | `47adb9ddf6eaf8f475b89f59ee16fbd5df183149a11ad1574eaa645b47a6d58aec2ca70ba857ce9f1a5793d44cf7a61ebc6874793bb685edaf19410f4f76fd13`
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-node-linux-ppc64le.tar.gz) | `a3bc4a165567c7b76a3e45ab7b102d6eb3ecf373eb048173f921a4964cf9be8891d0d5b8dafbd88c3af7b0e21ef3d41c1e540c3347ddd84b929b3a3d02ceb7b2`
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-node-linux-s390x.tar.gz) | `109ddf37c748f69584c829db57107c3518defe005c11fcd2a1471845c15aae0a3c89aafdd734229f4069ed18856cc650c80436684e1bdc43cfee3149b0324746`
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0-beta.1/kubernetes-node-windows-amd64.tar.gz) | `a3a75d2696ad3136476ad7d811e8eabaff5111b90e592695e651d6111f819ebf0165b8b7f5adc05afb5f7f01d1e5fb64876cb696e492feb20a477a5800382b7a`
+
+## Changelog since v1.18.0-beta.0
+
+## Urgent Upgrade Notes
+
+### (No, really, you MUST read this before you upgrade)
+
+- The StreamingProxyRedirects feature and `--redirect-container-streaming` flag are deprecated, and will be removed in a future release. The default behavior (proxy streaming requests through the kubelet) will be the only supported option.
+  If you are setting `--redirect-container-streaming=true`, then you must migrate off this configuration. The flag can no longer be enabled starting in v1.20. If you are not setting the flag, no action is necessary. ([#88290](https://github.com/kubernetes/kubernetes/pull/88290), [@tallclair](https://github.com/tallclair)) [SIG API Machinery and Node]
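+
+  To check whether a node's kubelet still sets the deprecated flag, a sketch (process inspection only; systemd unit files and config locations vary by distribution):
+
+  ```bash
+  # Look for the deprecated flag in the running kubelet's arguments.
+  if pgrep -a kubelet | grep -q -- '--redirect-container-streaming=true'; then
+    echo "kubelet still sets --redirect-container-streaming; migrate before v1.20"
+  else
+    echo "flag not set; no action necessary"
+  fi
+  ```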
+
+- Azure: add support for using network resources (VNet, LB, IP, etc.) in a different AAD Tenant and Subscription than the ones used for the cluster. This adds the properties `networkResourceTenantID` and `networkResourceSubscriptionID` to the cloud provider auth config section to indicate where the network resources live, and the functions `GetMultiTenantServicePrincipalToken` and `GetNetworkResourceServicePrincipalToken` to fetch the service principal tokens used by the Azure VM/VMSS clients and by the Azure network resource (Load Balancer, Public IP, Route Table, Network Security Group and their sub level resources) clients, respectively. User documentation: https://github.com/kubernetes-sigs/cloud-provider-azure/pull/301 ([#88384](https://github.com/kubernetes/kubernetes/pull/88384), [@bowen5](https://github.com/bowen5)) [SIG Cloud Provider]
+
+## Changes by Kind
+
+### Deprecation
+
+- Azure service annotation service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset has been deprecated. Support for it will be removed in a future release. ([#88462](https://github.com/kubernetes/kubernetes/pull/88462), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+
+### API Change
+
+- API additions to apiserver types ([#87179](https://github.com/kubernetes/kubernetes/pull/87179), [@Jefftree](https://github.com/Jefftree)) [SIG API Machinery, Cloud Provider and Cluster Lifecycle]
+- Add Scheduling Profiles to kubescheduler.config.k8s.io/v1alpha2 ([#88087](https://github.com/kubernetes/kubernetes/pull/88087), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling and Testing]
+- Added support for multiple sizes of huge pages on a container level ([#84051](https://github.com/kubernetes/kubernetes/pull/84051), [@bart0sh](https://github.com/bart0sh)) [SIG Apps, Node and Storage]
+- AppProtocol is a new field on Service and Endpoints resources, enabled with the ServiceAppProtocol feature gate. ([#88503](https://github.com/kubernetes/kubernetes/pull/88503), [@robscott](https://github.com/robscott)) [SIG Apps and Network]
+- Fixed missing validation of uniqueness of list items in lists with `x-kubernetes-list-type: map` or `x-kubernetes-list-type: set` in CustomResources. ([#84920](https://github.com/kubernetes/kubernetes/pull/84920), [@sttts](https://github.com/sttts)) [SIG API Machinery]
+- Introduces optional --detect-local flag to kube-proxy.
+  Currently the only supported value is "cluster-cidr",
+  which is the default if not specified. ([#87748](https://github.com/kubernetes/kubernetes/pull/87748), [@satyasm](https://github.com/satyasm)) [SIG Cluster Lifecycle, Network and Scheduling]
+- Kube-scheduler can run more than one scheduling profile. Given a pod, the profile is selected by using its `.spec.schedulerName`. ([#88285](https://github.com/kubernetes/kubernetes/pull/88285), [@alculquicondor](https://github.com/alculquicondor)) [SIG Apps, Scheduling and Testing]
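+
+  A minimal sketch of opting a Pod into a profile (assumes a profile whose `schedulerName` is `low-priority` was defined in the scheduler's component config; the Pod name is illustrative):
+
+  ```bash
+  kubectl apply -f - <<'EOF'
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    name: batch-worker
+  spec:
+    schedulerName: low-priority   # selects the matching scheduling profile
+    containers:
+    - name: worker
+      image: busybox
+      command: ["sleep", "3600"]
+  EOF
+  ```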
+- Moving Windows RunAsUserName feature to GA ([#87790](https://github.com/kubernetes/kubernetes/pull/87790), [@marosset](https://github.com/marosset)) [SIG Apps and Windows]
+
+### Feature
+
+- Add --dry-run to kubectl delete, taint, replace ([#88292](https://github.com/kubernetes/kubernetes/pull/88292), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI and Testing]
+- Add huge page stats to Allocated resources in "kubectl describe node" ([#80605](https://github.com/kubernetes/kubernetes/pull/80605), [@odinuge](https://github.com/odinuge)) [SIG CLI]
+- Kubeadm: The ClusterStatus struct present in the kubeadm-config ConfigMap is deprecated and will be removed in a future version. It is going to be maintained by kubeadm until it gets removed. The same information can be found on `etcd` and `kube-apiserver` pod annotations, `kubeadm.kubernetes.io/etcd.advertise-client-urls` and `kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint` respectively. ([#87656](https://github.com/kubernetes/kubernetes/pull/87656), [@ereslibre](https://github.com/ereslibre)) [SIG Cluster Lifecycle]
+- Kubeadm: add the experimental feature gate PublicKeysECDSA that can be used to create a
+  cluster with ECDSA certificates from "kubeadm init". Renewal of existing ECDSA certificates is
+  also supported using "kubeadm alpha certs renew", but not switching between the RSA and
+  ECDSA algorithms on the fly or during upgrades. ([#86953](https://github.com/kubernetes/kubernetes/pull/86953), [@rojkov](https://github.com/rojkov)) [SIG API Machinery, Auth and Cluster Lifecycle]
+- Kubeadm: on kubeconfig certificate renewal, keep the embedded CA in sync with the one on disk ([#88052](https://github.com/kubernetes/kubernetes/pull/88052), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Kubeadm: upgrade supports fallback to the nearest known etcd version if an unknown k8s version is passed ([#88373](https://github.com/kubernetes/kubernetes/pull/88373), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- New flag `--show-hidden-metrics-for-version` in kube-scheduler can be used to show all hidden metrics that were deprecated in the previous minor release. ([#84913](https://github.com/kubernetes/kubernetes/pull/84913), [@serathius](https://github.com/serathius)) [SIG Instrumentation and Scheduling]
+- Scheduler framework permit plugins now run at the end of the scheduling cycle, after reserve plugins. Waiting on permit will remain in the beginning of the binding cycle. ([#88199](https://github.com/kubernetes/kubernetes/pull/88199), [@mateuszlitwin](https://github.com/mateuszlitwin)) [SIG Scheduling]
+- The kubelet and the default docker runtime now support running ephemeral containers in the Linux process namespace of a target container. Other container runtimes must implement this feature before it will be available in that runtime. ([#84731](https://github.com/kubernetes/kubernetes/pull/84731), [@verb](https://github.com/verb)) [SIG Node]
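+
+  With the new `kubectl alpha debug` command this enables sketches like the following (assumes a running Pod named `myapp` with a container named `app`; flag spelling per v1.18's alpha command):
+
+  ```bash
+  # Attach an ephemeral busybox container that shares the "app" container's
+  # process namespace, so its processes are visible to your debugging tools.
+  kubectl alpha debug -it myapp --image=busybox --target=app
+  ```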
+
+### Other (Bug, Cleanup or Flake)
+
+- Add delays between goroutines for vm instance update ([#88094](https://github.com/kubernetes/kubernetes/pull/88094), [@aramase](https://github.com/aramase)) [SIG Cloud Provider]
+- Add init container logs to cluster dump info. ([#88324](https://github.com/kubernetes/kubernetes/pull/88324), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- CPU limits are now respected for Windows containers. If a node is over-provisioned, no weighting is used - only limits are respected. ([#86101](https://github.com/kubernetes/kubernetes/pull/86101), [@PatrickLang](https://github.com/PatrickLang)) [SIG Node, Testing and Windows]
+- Cloud provider config CloudProviderBackoffMode has been removed since it is no longer used. ([#88463](https://github.com/kubernetes/kubernetes/pull/88463), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Evictions due to pods breaching their ephemeral storage limits are now recorded by the `kubelet_evictions` metric and can be alerted on. ([#87906](https://github.com/kubernetes/kubernetes/pull/87906), [@smarterclayton](https://github.com/smarterclayton)) [SIG Node]
+- Fix: add remediation in azure disk attach/detach ([#88444](https://github.com/kubernetes/kubernetes/pull/88444), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fix: check disk status before deleting azure disk ([#88360](https://github.com/kubernetes/kubernetes/pull/88360), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fixed cleaning of CSI raw block volumes. ([#87978](https://github.com/kubernetes/kubernetes/pull/87978), [@jsafrane](https://github.com/jsafrane)) [SIG Storage]
+- Get-kube.sh uses gcloud's current local GCP service account for auth when the provider is GCE or GKE instead of the metadata server default ([#88383](https://github.com/kubernetes/kubernetes/pull/88383), [@BenTheElder](https://github.com/BenTheElder)) [SIG Cluster Lifecycle]
+- Golang/x/net has been updated to bring in fixes for CVE-2020-9283 ([#88381](https://github.com/kubernetes/kubernetes/pull/88381), [@BenTheElder](https://github.com/BenTheElder)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle and Instrumentation]
+- Kubeadm now includes CoreDNS version 1.6.7 ([#86260](https://github.com/kubernetes/kubernetes/pull/86260), [@rajansandeep](https://github.com/rajansandeep)) [SIG Cluster Lifecycle]
+- Kubeadm: fix the bug that 'kubeadm upgrade' hangs in a single node cluster ([#88434](https://github.com/kubernetes/kubernetes/pull/88434), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Optimize kubectl version help info ([#88313](https://github.com/kubernetes/kubernetes/pull/88313), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Removes the deprecated command `kubectl rolling-update` ([#88057](https://github.com/kubernetes/kubernetes/pull/88057), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG Architecture, CLI and Testing]
+
+
+# v1.18.0-alpha.5
+
+[Documentation](https://docs.k8s.io)
+
+## Downloads for v1.18.0-alpha.5
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes.tar.gz) | `6452cac2b80721e9f577cb117c29b9ac6858812b4275c2becbf74312566f7d016e8b34019bd1bf7615131b191613bf9b973e40ad9ac8f6de9007d41ef2d7fd70`
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-src.tar.gz) | `e41d9d4dd6910a42990051fcdca4bf5d3999df46375abd27ffc56aae9b455ae984872302d590da6aa85bba6079334fb5fe511596b415ee79843dee1c61c137da`
+
+### Client Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-client-darwin-386.tar.gz) | `5c95935863492b31d4aaa6be93260088dafea27663eb91edca980ca3a8485310e60441bc9050d4d577e9c3f7ffd96db516db8d64321124cec1b712e957c9fe1c`
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-client-darwin-amd64.tar.gz) | `868faa578b3738604d8be62fae599ccc556799f1ce54807f1fe72599f20f8a1f98ad8152fac14a08a463322530b696d375253ba3653325e74b587df6e0510da3`
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-client-linux-386.tar.gz) | `76a89d1d30b476b47f8fb808e342f89608e5c1c1787c4c06f2d7e763f9482e2ae8b31e6ad26541972e2b9a3a7c28327e3150cdd355e8b8d8b050a801bbf08d49`
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-client-linux-amd64.tar.gz) | `07ad96a09b44d1c707d7c68312c5d69b101a3424bf1e6e9400b2e7a3fba78df04302985d473ddd640d8f3f0257be34110dbe1304b9565dd9d7a4639b7b7b85fd`
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-client-linux-arm.tar.gz) | `c04fed9fa370a75c1b8e18b2be0821943bb9befcc784d14762ea3278e73600332a9b324d5eeaa1801d20ad6be07a553c41dcf4fa7ab3eadd0730ab043d687c8c`
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-client-linux-arm64.tar.gz) | `4199147dea9954333df26d34248a1cb7b02ebbd6380ffcd42d9f9ed5fdabae45a59215474dab3c11436c82e60bd27cbd03b3dde288bf611cd3e78b87c783c6a9`
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-client-linux-ppc64le.tar.gz) | `4f6d4d61d1c52d3253ca19031ebcd4bad06d19b68bbaaab5c8e8c590774faea4a5ceab1f05f2706b61780927e1467815b3479342c84d45df965aba78414727c4`
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-client-linux-s390x.tar.gz) | `e2a454151ae5dd891230fb516a3f73f73ab97832db66fd3d12e7f1657a569f58a9fe2654d50ddd7d8ec88a5ff5094199323a4c6d7d44dcf7edb06cca11dd4de1`
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-client-windows-386.tar.gz) | `14b262ba3b71c41f545db2a017cf1746075ada5745a858d2a62bc9df7c5dc10607220375db85e2c4cb85307b09709e58bc66a407488e0961191e3249dc7742b0`
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-client-windows-amd64.tar.gz) | `26353c294755a917216664364b524982b7f5fc6aa832ce90134bb178df8a78604963c68873f121ea5f2626ff615bdbf2ffe54e00578739cde6df42ffae034732`
+
+### Server Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-server-linux-amd64.tar.gz) | `ba77e0e7c610f59647c1b2601f82752964a0f54b7ad609a89b00fcfd553d0f0249f6662becbabaa755bb769b36a2000779f08022c40fb8cc61440337481317a1`
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-server-linux-arm.tar.gz) | `45e87b3e844ea26958b0b489e8c9b90900a3253000850f5ff9e87ffdcafba72ab8fd17b5ba092051a58a4bc277912c047a85940ec7f093dff6f9e8bf6fed3b42`
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-server-linux-arm64.tar.gz) | `155e136e3124ead69c594eead3398d6cfdbb8f823c324880e8a7bbd1b570b05d13a77a69abd0a6758cfcc7923971cc6da4d3e0c1680fd519b632803ece00d5ce`
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-server-linux-ppc64le.tar.gz) | `3fa0fb8221da19ad9d03278961172b7fa29a618b30abfa55e7243bb937dede8df56658acf02e6b61e7274fbc9395e237f49c62f2a83017eca2a69f67af31c01c`
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-server-linux-s390x.tar.gz) | `db3199c3d7ba0b326d71dc8b80f50b195e79e662f71386a3b2976d47d13d7b0136887cc21df6f53e70a3d733da6eac7bbbf3bab2df8a1909a3cee4b44c32dd0b`
+
+### Node Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-node-linux-amd64.tar.gz) | `addcdfbad7f12647e6babb8eadf853a374605c8f18bf63f416fa4d3bf1b903aa206679d840433206423a984bb925e7983366edcdf777cf5daef6ef88e53d6dfa`
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-node-linux-arm.tar.gz) | `b2ac54e0396e153523d116a2aaa32c919d6243931e0104cd47a23f546d710e7abdaa9eae92d978ce63c92041e63a9b56f5dd8fd06c812a7018a10ecac440f768`
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-node-linux-arm64.tar.gz) | `7aab36f2735cba805e4fd109831a1af0f586a88db3f07581b6dc2a2aab90076b22c96b490b4f6461a8fb690bf78948b6d514274f0d6fb0664081de2d44dc48e1`
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-node-linux-ppc64le.tar.gz) | `a579936f07ebf86f69f297ac50ba4c34caf2c0b903f73190eb581c78382b05ef36d41ade5bfd25d7b1b658cfcbee3d7125702a18e7480f9b09a62733a512a18a`
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-node-linux-s390x.tar.gz) | `58fa0359ddd48835192fab1136a2b9b45d1927b04411502c269cda07cb8a8106536973fb4c7fedf1d41893a524c9fe2e21078fdf27bfbeed778273d024f14449`
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.5/kubernetes-node-windows-amd64.tar.gz) | `9086c03cd92b440686cea6d8c4e48045cc46a43ab92ae0e70350b3f51804b9e2aaae7178142306768bae00d9ef6dd938167972bfa90b12223540093f735a45db`
+
+## Changelog since v1.18.0-alpha.3
+
+### Deprecation
+
+- Kubeadm: command line option "kubelet-version" for `kubeadm upgrade node` has been deprecated and will be removed in a future release. ([#87942](https://github.com/kubernetes/kubernetes/pull/87942), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+
+### API Change
+
+- Kubelet podresources API now provides the information about active pods only. ([#79409](https://github.com/kubernetes/kubernetes/pull/79409), [@takmatsu](https://github.com/takmatsu)) [SIG Node]
+- Remove deprecated fields from .leaderElection in kubescheduler.config.k8s.io/v1alpha2 ([#87904](https://github.com/kubernetes/kubernetes/pull/87904), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Signatures on generated clientset methods have been modified to accept `context.Context` as a first argument. Signatures of generated Create, Update, and Patch methods have been updated to accept CreateOptions, UpdateOptions and PatchOptions respectively. Clientsets with the previous interface have been added in new "deprecated" packages to allow incremental migration to the new APIs. The deprecated packages will be removed in the 1.21 release. ([#87299](https://github.com/kubernetes/kubernetes/pull/87299), [@mikedanese](https://github.com/mikedanese)) [SIG API Machinery, Apps, Auth, Autoscaling, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation, Network, Node, Scheduling, Storage, Testing and Windows]
+- The k8s.io/node-api component is no longer updated. Instead, use the RuntimeClass types located within k8s.io/api, and the generated clients located within k8s.io/client-go ([#87503](https://github.com/kubernetes/kubernetes/pull/87503), [@liggitt](https://github.com/liggitt)) [SIG Node and Release]
+
+### Feature
+
+- Add indexer for storage cacher ([#85445](https://github.com/kubernetes/kubernetes/pull/85445), [@shaloulcy](https://github.com/shaloulcy)) [SIG API Machinery]
+- Add support for mount options to the FC volume plugin ([#87499](https://github.com/kubernetes/kubernetes/pull/87499), [@ejweber](https://github.com/ejweber)) [SIG Storage]
+- Added a config-mode flag in azure auth module to enable getting AAD token without spn: prefix in audience claim. When it's not specified, the default behavior doesn't change. ([#87630](https://github.com/kubernetes/kubernetes/pull/87630), [@weinong](https://github.com/weinong)) [SIG API Machinery, Auth, CLI and Cloud Provider]
+- Introduced BackoffManager interface for backoff management ([#87829](https://github.com/kubernetes/kubernetes/pull/87829), [@zhan849](https://github.com/zhan849)) [SIG API Machinery]
+- PodTopologySpread plugin now excludes terminating Pods when making scheduling decisions. ([#87845](https://github.com/kubernetes/kubernetes/pull/87845), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling]
+- Promote CSIMigrationOpenStack to Beta (off by default since it requires installation of the OpenStack Cinder CSI Driver)
+  The in-tree OpenStack Cinder plugin "kubernetes.io/cinder" was already deprecated a while ago and will be removed in 1.20. Users should enable CSIMigration + CSIMigrationOpenStack features and install the OpenStack Cinder CSI Driver (https://github.com/kubernetes-sigs/cloud-provider-openstack) to avoid disruption to existing Pod and PVC objects at that time.
+  Users should start using the OpenStack Cinder CSI Driver directly for any new volumes. ([#85637](https://github.com/kubernetes/kubernetes/pull/85637), [@dims](https://github.com/dims)) [SIG Cloud Provider]
+
+### Design
+
+- The scheduler Permit extension point doesn't return a boolean value in its Allow() and Reject() functions. ([#87936](https://github.com/kubernetes/kubernetes/pull/87936), [@Huang-Wei](https://github.com/Huang-Wei)) [SIG Scheduling]
+
+### Other (Bug, Cleanup or Flake)
+
+- Adds "volume.beta.kubernetes.io/migrated-to" annotation to PV's and PVC's when they are migrated to signal external provisioners to pick up those objects for Provisioning and Deleting. ([#87098](https://github.com/kubernetes/kubernetes/pull/87098), [@davidz627](https://github.com/davidz627)) [SIG Apps and Storage]
+- Fix a bug in the dual-stack IPVS proxier where stale IPv6 endpoints were not being cleaned up ([#87695](https://github.com/kubernetes/kubernetes/pull/87695), [@andrewsykim](https://github.com/andrewsykim)) [SIG Network]
+- Fix `kubectl drain` to ignore DaemonSets and others. ([#87361](https://github.com/kubernetes/kubernetes/pull/87361), [@zhouya0](https://github.com/zhouya0)) [SIG CLI]
+- Fix: add azure disk migration support for CSINode ([#88014](https://github.com/kubernetes/kubernetes/pull/88014), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider and Storage]
+- Fix: add non-retriable errors in azure clients ([#87941](https://github.com/kubernetes/kubernetes/pull/87941), [@andyzhangx](https://github.com/andyzhangx)) [SIG Cloud Provider]
+- Fixed NetworkPolicy validation so that `except` values are no longer accepted when they are outside the CIDR range. ([#86578](https://github.com/kubernetes/kubernetes/pull/86578), [@tnqn](https://github.com/tnqn)) [SIG Network]
+- Improves performance of the node authorizer ([#87696](https://github.com/kubernetes/kubernetes/pull/87696), [@liggitt](https://github.com/liggitt)) [SIG Auth]
+- Iptables/userspace proxy: improve performance by getting local addresses only once per sync loop, instead of for every external IP ([#85617](https://github.com/kubernetes/kubernetes/pull/85617), [@andrewsykim](https://github.com/andrewsykim)) [SIG API Machinery, CLI, Cloud Provider, Cluster Lifecycle, Instrumentation and Network]
+- Kube-aggregator: always sets unavailableGauge metric to reflect the current state of a service. ([#87778](https://github.com/kubernetes/kubernetes/pull/87778), [@p0lyn0mial](https://github.com/p0lyn0mial)) [SIG API Machinery]
+- Kubeadm allows configuring single-stack clusters if dual-stack is enabled ([#87453](https://github.com/kubernetes/kubernetes/pull/87453), [@aojea](https://github.com/aojea)) [SIG API Machinery, Cluster Lifecycle and Network]
+- Kubeadm: 'kubeadm alpha kubelet config download' has been removed, please use 'kubeadm upgrade node phase kubelet-config' instead ([#87944](https://github.com/kubernetes/kubernetes/pull/87944), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Kubeadm: remove 'kubeadm upgrade node config' command since it was deprecated in v1.15, please use 'kubeadm upgrade node phase kubelet-config' instead ([#87975](https://github.com/kubernetes/kubernetes/pull/87975), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+- Kubectl describe and kubectl top pod will return a message saying "No resources found" or "No resources found in namespace" if there are no results to display. ([#87527](https://github.com/kubernetes/kubernetes/pull/87527), [@brianpursley](https://github.com/brianpursley)) [SIG CLI]
+- Kubelet metrics gathered through metrics-server or prometheus should no longer time out for Windows nodes running more than 3 pods. ([#87730](https://github.com/kubernetes/kubernetes/pull/87730), [@marosset](https://github.com/marosset)) [SIG Node, Testing and Windows]
+- Kubelet metrics have been changed to buckets.
+  For example, the exec/{podNamespace}/{podID}/{containerName} metric is now just exec. ([#87913](https://github.com/kubernetes/kubernetes/pull/87913), [@cheftako](https://github.com/cheftako)) [SIG Node]
+- Limit number of instances in a single update to GCE target pool to 1000. ([#87881](https://github.com/kubernetes/kubernetes/pull/87881), [@wojtek-t](https://github.com/wojtek-t)) [SIG Cloud Provider, Network and Scalability]
+- Make Azure clients only retry on specified HTTP status codes ([#88017](https://github.com/kubernetes/kubernetes/pull/88017), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Pause image contains "Architecture" in non-amd64 images ([#87954](https://github.com/kubernetes/kubernetes/pull/87954), [@BenTheElder](https://github.com/BenTheElder)) [SIG Release]
+- Pods that are considered for preemption and haven't started don't produce an error log. ([#87900](https://github.com/kubernetes/kubernetes/pull/87900), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- Prevent error message from being displayed when running kubectl plugin list and your path includes an empty string ([#87633](https://github.com/kubernetes/kubernetes/pull/87633), [@brianpursley](https://github.com/brianpursley)) [SIG CLI]
+- `kubectl create clusterrolebinding` creates rbac.authorization.k8s.io/v1 object ([#85889](https://github.com/kubernetes/kubernetes/pull/85889), [@oke-py](https://github.com/oke-py)) [SIG CLI]
+
+# v1.18.0-alpha.4
+
+[Documentation](https://docs.k8s.io)
+
+## Important note about manual tag
+
+Due to a [tagging bug in our Release Engineering tooling](https://github.com/kubernetes/release/issues/1080) during `v1.18.0-alpha.3`, we needed to push a manual tag (`v1.18.0-alpha.4`).
+
+**No binaries have been produced or will be provided for `v1.18.0-alpha.4`.**
+
+The changelog for `v1.18.0-alpha.4` is included as part of the [changelog since v1.18.0-alpha.3](#changelog-since-v1180-alpha3) section.
+
+# v1.18.0-alpha.3
+
+[Documentation](https://docs.k8s.io)
+
+## Downloads for v1.18.0-alpha.3
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes.tar.gz) | `60bf3bfc23b428f53fd853bac18a4a905b980fcc0bacd35ccd6357a89cfc26e47de60975ea6b712e65980e6b9df82a22331152d9f08ed4dba44558ba23a422d4`
+[kubernetes-src.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-src.tar.gz) | `8adf1016565a7c93713ab6fa4293c2d13b4f6e4e1ec4dcba60bd71e218b4dbe9ef5eb7dbb469006743f498fc7ddeb21865cd12bec041af60b1c0edce8b7aecd5`
+
+### Client Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-client-darwin-386.tar.gz) | `abb32e894e8280c772e96227b574da81cd1eac374b8d29158b7f222ed550087c65482eef4a9817dfb5f2baf0d9b85fcdfa8feced0fbc1aacced7296853b57e1f`
+[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-client-darwin-amd64.tar.gz) | `5e4b1a993264e256ec1656305de7c306094cae9781af8f1382df4ce4eed48ce030827fde1a5e757d4ad57233d52075c9e4e93a69efbdc1102e4ba810705ccddc`
+[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-client-linux-386.tar.gz) | `68da39c2ae101d2b38f6137ceda07eb0c2124794982a62ef483245dbffb0611c1441ca085fa3127e7a9977f45646788832a783544ff06954114548ea0e526e46`
+[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-client-linux-amd64.tar.gz) | `dc236ffa8ad426620e50181419e9bebe3c161e953dbfb8a019f61b11286e1eb950b40d7cc03423bdf3e6974973bcded51300f98b55570c29732fa492dcde761d`
+[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-client-linux-arm.tar.gz) | `ab0a8bd6dc31ea160b731593cdc490b3cc03668b1141cf95310bd7060dcaf55c7ee9842e0acae81063fdacb043c3552ccdd12a94afd71d5310b3ce056fdaa06c`
+[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-client-linux-arm64.tar.gz) | `159ea083c601710d0d6aea423eeb346c99ffaf2abd137d35a53e87a07f5caf12fca8790925f3196f67b768fa92a024f83b50325dbca9ccd4dde6c59acdce3509`
+[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-client-linux-ppc64le.tar.gz) | `16b0459adfa26575d13be49ab53ac7f0ffd05e184e4e13d2dfbfe725d46bb8ac891e1fd8aebe36ecd419781d4cc5cf3bd2aaaf5263cf283724618c4012408f40`
+[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-client-linux-s390x.tar.gz) | `d5aa1f5d89168995d2797eb839a04ce32560f405b38c1c0baaa0e313e4771ae7bb3b28e22433ad5897d36aadf95f73eb69d8d411d31c4115b6b0adf5fe041f85`
+[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-client-windows-386.tar.gz) | `374e16a1e52009be88c94786f80174d82dff66399bf294c9bee18a2159c42251c5debef1109a92570799148b08024960c6c50b8299a93fd66ebef94f198f34e9`
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-client-windows-amd64.tar.gz) | `5a94c1068c19271f810b994adad8e62fae03b3d4473c7c9e6d056995ff7757ea61dd8d140c9267dd41e48808876673ce117826d35a3c1bb5652752f11a044d57`
+
+### Server Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-server-linux-amd64.tar.gz) | `a677bec81f0eba75114b92ff955bac74512b47e53959d56a685dae5edd527283d91485b1e86ad74ef389c5405863badf7eb22e2f0c9a568a4d0cb495c6a5c32f`
+[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-server-linux-arm.tar.gz) | `2fb696f86ff13ebeb5f3cf2b254bf41303644c5ea84a292782eac6123550702655284d957676d382698c091358e5c7fe73f32803699c19be7138d6530fe413b6`
+[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-server-linux-arm64.tar.gz) | `738e95da9cfb8f1309479078098de1c38cef5e1dd5ee1129b77651a936a412b7cd0cf15e652afc7421219646a98846ab31694970432e48dea9c9cafa03aa59cf`
+[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-server-linux-ppc64le.tar.gz) | `7a85bfcbb2aa636df60c41879e96e788742ecd72040cb0db2a93418439c125218c58a4cfa96d01b0296c295793e94c544e87c2d98d50b49bc4cb06b41f874376`
+[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-server-linux-s390x.tar.gz) | `1f1cdb2efa3e7cac857203d8845df2fdaa5cf1f20df764efffff29371945ec58f6deeba06f8fbf70b96faf81b0c955bf4cb84e30f9516cb2cc1ed27c2d2185a6`
+
+### Node Binaries
+
+filename | sha512 hash
+-------- | -----------
+[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-node-linux-amd64.tar.gz) | `4ccfced3f5ba4adfa58f4a9d1b2c5bdb3e89f9203ab0e27d11eb1c325ac323ebe63c015d2c9d070b233f5d1da76cab5349da3528511c1cd243e66edc9af381c4`
+[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-node-linux-arm.tar.gz) | `d695a69d18449062e4c129e54ec8384c573955f8108f4b78adc2ec929719f2196b995469c728dd6656c63c44cda24315543939f85131ebc773cfe0de689df55b`
+[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-node-linux-arm64.tar.gz) | `21df1da88c89000abc22f97e482c3aaa5ce53ec9628d83dda2e04a1d86c4d53be46c03ed6f1f211df3ee5071bce39d944ff7716b5b6ada3b9c4821d368b0a898`
+[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-node-linux-ppc64le.tar.gz) | `ff77e3aacb6ed9d89baed92ef542c8b5cec83151b6421948583cf608bca3b779dce41fc6852961e00225d5e1502f6a634bfa61a36efa90e1aee90dedb787c2d2`
+[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-node-linux-s390x.tar.gz) | `57d75b7977ec1a0f6e7ed96a304dbb3b8664910f42ca19aab319a9ec33535ff5901dfca4abcb33bf5741cde6d152acd89a5f8178f0efe1dc24430e0c1af5b98f`
+[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.3/kubernetes-node-windows-amd64.tar.gz) | `63fdbb71773cfd73a914c498e69bb9eea3fc314366c99ffb8bd42ec5b4dae807682c83c1eb5cfb1e2feb4d11d9e49cc85ba644e954241320a835798be7653d61`
+
+## Changelog since v1.18.0-alpha.2
+
+### Deprecation
+
+- Remove all the generators from kubectl run. It will now only create pods. Additionally, it deprecates all the flags that are no longer relevant. ([#87077](https://github.com/kubernetes/kubernetes/pull/87077), [@soltysh](https://github.com/soltysh)) [SIG Architecture, SIG CLI, and SIG Testing]
+- kubeadm: kube-dns is deprecated and will not be supported in a future version ([#86574](https://github.com/kubernetes/kubernetes/pull/86574), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
+
+### API Change
+
+- Add kubescheduler.config.k8s.io/v1alpha2 ([#87628](https://github.com/kubernetes/kubernetes/pull/87628), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
+- `--enable-cadvisor-json-endpoints` is now disabled by default. If you need access to the cAdvisor v1 JSON API, please enable it explicitly in the kubelet command line. Please note that this flag was deprecated in 1.15 and will be removed in 1.19. ([#87440](https://github.com/kubernetes/kubernetes/pull/87440), [@dims](https://github.com/dims)) [SIG Instrumentation, SIG Node, and SIG Testing]
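+
+  If you still depend on the cAdvisor v1 JSON API, re-enabling it during 1.18 might look like the sketch below (assumes a kubeadm-style node that reads `KUBELET_EXTRA_ARGS` from `/etc/default/kubelet`; the flag itself goes away in 1.19):
+
+  ```bash
+  # Deprecated escape hatch: prefer migrating consumers off the v1 JSON API.
+  # Note: this overwrites any existing KUBELET_EXTRA_ARGS on the node.
+  echo 'KUBELET_EXTRA_ARGS="--enable-cadvisor-json-endpoints=true"' | sudo tee /etc/default/kubelet
+  sudo systemctl restart kubelet
+  ```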
+- The following feature gates are removed, because the associated features were unconditionally enabled in previous releases: CustomResourceValidation, CustomResourceSubresources, CustomResourceWebhookConversion, CustomResourcePublishOpenAPI, CustomResourceDefaulting ([#87475](https://github.com/kubernetes/kubernetes/pull/87475), [@liggitt](https://github.com/liggitt)) [SIG API Machinery]
+
+### Feature
+
+- The aggregation API will have alpha support for network proxy ([#87515](https://github.com/kubernetes/kubernetes/pull/87515), [@Sh4d1](https://github.com/Sh4d1)) [SIG API Machinery]
+- API request throttling (due to a high rate of requests) is now reported in client-go logs at log level 2. The messages are of the form
+
+  Throttling request took 1.50705208s, request: GET:
+
+  The presence of these messages may indicate to the administrator the need to tune the cluster accordingly. ([#87740](https://github.com/kubernetes/kubernetes/pull/87740), [@jennybuckley](https://github.com/jennybuckley)) [SIG API Machinery]
+- kubeadm: reject a node joining the cluster if a node with the same name already exists ([#81056](https://github.com/kubernetes/kubernetes/pull/81056), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- disableAvailabilitySetNodes is added to avoid VM list calls for VMSS clusters. It should only be used when vmType is "vmss" and all the nodes (including masters) are VMSS virtual machines. ([#87685](https://github.com/kubernetes/kubernetes/pull/87685), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- The kubectl --dry-run flag now accepts the values 'client', 'server', and 'none', to support client-side and server-side dry-run strategies. The boolean and unset values for the --dry-run flag are deprecated and a value will be required in a future version. ([#87580](https://github.com/kubernetes/kubernetes/pull/87580), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI]
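+
+  For example, with any local manifest (here a hypothetical `deploy.yaml`):
+
+  ```bash
+  # Client-side: render and validate locally, never contacting the cluster.
+  kubectl apply -f deploy.yaml --dry-run=client -o yaml
+  # Server-side: the API server runs admission and validation but persists nothing.
+  kubectl apply -f deploy.yaml --dry-run=server
+  ```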
([#87580](https://github.com/kubernetes/kubernetes/pull/87580), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG CLI]
+- Add support for pre-allocated hugepages for more than one page size ([#82820](https://github.com/kubernetes/kubernetes/pull/82820), [@odinuge](https://github.com/odinuge)) [SIG Apps]
+- Update CNI version to v0.8.5 ([#78819](https://github.com/kubernetes/kubernetes/pull/78819), [@justaugustus](https://github.com/justaugustus)) [SIG API Machinery, SIG Cluster Lifecycle, SIG Network, SIG Release, and SIG Testing]
+- Skip default spreading scoring plugin for pods that define TopologySpreadConstraints ([#87566](https://github.com/kubernetes/kubernetes/pull/87566), [@skilxn-go](https://github.com/skilxn-go)) [SIG Scheduling]
+- Added more details to taint toleration errors ([#87250](https://github.com/kubernetes/kubernetes/pull/87250), [@starizard](https://github.com/starizard)) [SIG Apps, and SIG Scheduling]
+- Scheduler: Add DefaultBinder plugin ([#87430](https://github.com/kubernetes/kubernetes/pull/87430), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling, and SIG Testing]
+- Kube-apiserver metrics will now include request counts, latencies, and response sizes for /healthz, /livez, and /readyz requests. ([#83598](https://github.com/kubernetes/kubernetes/pull/83598), [@jktomer](https://github.com/jktomer)) [SIG API Machinery]
+
+### Other (Bug, Cleanup or Flake)
+
+- Fix the masters' rolling upgrade causing a thundering herd of LISTs on etcd, leading to control plane unavailability. ([#86430](https://github.com/kubernetes/kubernetes/pull/86430), [@wojtek-t](https://github.com/wojtek-t)) [SIG API Machinery, SIG Node, and SIG Testing]
+- `kubectl diff` now returns 1 only when the diff finds changes, and >1 on kubectl errors. The "exit status code 1" message has also been muted. ([#87437](https://github.com/kubernetes/kubernetes/pull/87437), [@apelisse](https://github.com/apelisse)) [SIG CLI, and SIG Testing]
+- To reduce chances of throttling, VM cache is set to nil when Azure node provisioning state is deleting ([#87635](https://github.com/kubernetes/kubernetes/pull/87635), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
+- Fix regression in statefulset conversion which prevented applying a statefulset multiple times. ([#87706](https://github.com/kubernetes/kubernetes/pull/87706), [@liggitt](https://github.com/liggitt)) [SIG Apps, and SIG Testing]
+- Fixed two scheduler metrics (pending_pods and schedule_attempts_total) not being recorded ([#87692](https://github.com/kubernetes/kubernetes/pull/87692), [@everpeace](https://github.com/everpeace)) [SIG Scheduling]
+- Resolved a performance issue in the node authorizer index maintenance. ([#87693](https://github.com/kubernetes/kubernetes/pull/87693), [@liggitt](https://github.com/liggitt)) [SIG Auth]
+- Removed the 'client' label from apiserver_request_total. ([#87669](https://github.com/kubernetes/kubernetes/pull/87669), [@logicalhan](https://github.com/logicalhan)) [SIG API Machinery, and SIG Instrumentation]
+- `(*"k8s.io/client-go/rest".Request).{Do,DoRaw,Stream,Watch}` now require callers to pass a `context.Context` as an argument.
The context is used for timeout and cancellation signaling and to pass supplementary information to round trippers in the wrapped transport chain. If you don't need any of this functionality, it is sufficient to pass a context created with `context.Background()` to these functions. The `(*"k8s.io/client-go/rest".Request).Context` method is removed now that all methods that execute a request accept a context directly. ([#87597](https://github.com/kubernetes/kubernetes/pull/87597), [@mikedanese](https://github.com/mikedanese)) [SIG API Machinery, SIG Apps, SIG Auth, SIG Autoscaling, SIG CLI, SIG Cloud Provider, SIG Cluster Lifecycle, SIG Instrumentation, SIG Network, SIG Node, SIG Scheduling, SIG Storage, and SIG Testing]
+- For volumes that allow attaches across multiple nodes, attach and detach operations across different nodes are now executed in parallel. ([#87258](https://github.com/kubernetes/kubernetes/pull/87258), [@verult](https://github.com/verult)) [SIG Apps, SIG Node, and SIG Storage]
+- kubeadm: apply further improvements to the tentative support for concurrent etcd member join. Fixes a bug where multiple members could receive the same hostname. Increases the etcd client dial timeout and retry timeout for add/remove/... operations. ([#87505](https://github.com/kubernetes/kubernetes/pull/87505), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Reverted a kubectl azure auth module change where the oidc claim spn: prefix was omitted, resulting in breaking behavior with existing Azure AD OIDC-enabled api-servers ([#87507](https://github.com/kubernetes/kubernetes/pull/87507), [@weinong](https://github.com/weinong)) [SIG API Machinery, SIG Auth, and SIG Cloud Provider]
+- Update cri-tools to v1.17.0 ([#86305](https://github.com/kubernetes/kubernetes/pull/86305), [@saschagrunert](https://github.com/saschagrunert)) [SIG Cluster Lifecycle, and SIG Release]
+- kubeadm: remove the deprecated CoreDNS feature-gate. It had been set to "true" since v1.11 when the feature went GA. In v1.13 it was marked as deprecated and hidden from the CLI. ([#87400](https://github.com/kubernetes/kubernetes/pull/87400), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
+- Shared informers are now more reliable in the face of network disruption.
([#86015](https://github.com/kubernetes/kubernetes/pull/86015), [@squeed](https://github.com/squeed)) [SIG API Machinery] +- the CSR signing cert/key pairs will be reloaded from disk like the kube-apiserver cert/key pairs ([#86816](https://github.com/kubernetes/kubernetes/pull/86816), [@deads2k](https://github.com/deads2k)) [SIG API Machinery, SIG Apps, and SIG Auth] +- "kubectl describe statefulsets.apps" prints garbage for rolling update partition ([#85846](https://github.com/kubernetes/kubernetes/pull/85846), [@phil9909](https://github.com/phil9909)) [SIG CLI] + + + + + +# v1.18.0-alpha.2 + +[Documentation](https://docs.k8s.io) + +## Downloads for v1.18.0-alpha.2 + + +filename | sha512 hash +-------- | ----------- +[kubernetes.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes.tar.gz) | `7af83386b4b35353f0aa1bdaf73599eb08b1d1ca11ecc2c606854aff754db69f3cd3dc761b6d7fc86f01052f615ca53185f33dbf9e53b2f926b0f02fc103fbd3` +[kubernetes-src.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-src.tar.gz) | `a14b02a0a0bde97795a836a8f5897b0ee6b43e010e13e43dd4cca80a5b962a1ef3704eedc7916fed1c38ec663a71db48c228c91e5daacba7d9370df98c7ddfb6` + +### Client Binaries + +filename | sha512 hash +-------- | ----------- +[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-client-darwin-386.tar.gz) | `427f214d47ded44519007de2ae87160c56c2920358130e474b768299751a9affcbc1b1f0f936c39c6138837bca2a97792a6700896976e98c4beee8a1944cfde1` +[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-client-darwin-amd64.tar.gz) | `861fd81ac3bd45765575bedf5e002a2294aba48ef9e15980fc7d6783985f7d7fcde990ea0aef34690977a88df758722ec0a2e170d5dcc3eb01372e64e5439192` +[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-client-linux-386.tar.gz) | `7d59b05d6247e2606a8321c72cd239713373d876dbb43b0fb7f1cb857fa6c998038b41eeed78d9eb67ce77b0b71776ceed428cce0f8d2203c5181b473e0bd86c` +[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-client-linux-amd64.tar.gz) | `7cdefb4e32bad9d2df5bb8e7e0a6f4dab2ae6b7afef5d801ac5c342d4effdeacd799081fa2dec699ecf549200786c7623c3176252010f12494a95240dd63311d` +[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-client-linux-arm.tar.gz) | `6212bbf0fa1d01ced77dcca2c4b76b73956cd3c6b70e0701c1fe0df5ff37160835f6b84fa2481e0e6979516551b14d8232d1c72764a559a3652bfe2a1e7488ff` +[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-client-linux-arm64.tar.gz) | `1f0d9990700510165ee471acb2f88222f1b80e8f6deb351ce14cf50a70a9840fb99606781e416a13231c74b2bd7576981b5348171aa33b628d2666e366cd4629` +[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-client-linux-ppc64le.tar.gz) | `77e00ba12a32db81e96f8de84609de93f32c61bb3f53875a57496d213aa6d1b92c09ad5a6de240a78e1a5bf77fac587ff92874f34a10f8909ae08ca32fda45d2` +[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-client-linux-s390x.tar.gz) | `a39ec2044bed5a4570e9c83068e0fc0ce923ccffa44380f8bbc3247426beaff79c8a84613bcb58b05f0eb3afbc34c79fe3309aa2e0b81abcfd0aa04770e62e05` +[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-client-windows-386.tar.gz) | `1a0ab88f9b7e34b60ab31d5538e97202a256ad8b7b7ed5070cae5f2f12d5d4edeae615db7a34ebbe254004b6393c6b2480100b09e30e59c9139492a3019a596a` 
+[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-client-windows-amd64.tar.gz) | `1966eb5dfb78c1bc33aaa6389f32512e3aa92584250a0164182f3566c81d901b59ec78ee4e25df658bc1dd221b5a9527d6ce3b6c487ca3e3c0b319a077caa735` + +### Server Binaries + +filename | sha512 hash +-------- | ----------- +[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-server-linux-amd64.tar.gz) | `f814d6a3872e4572aa4da297c29def4c1fad8eba0903946780b6bf9788c72b99d71085c5aef9e12c01133b26fa4563c1766ba724ad2a8af2670a24397951a94d` +[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-server-linux-arm.tar.gz) | `56aa08225e546c92c2ff88ac57d3db7dd5e63640772ea72a429f080f7069827138cbc206f6f5fe3a0c01bfca043a9eda305ecdc1dcb864649114893e46b6dc84` +[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-server-linux-arm64.tar.gz) | `fb87128d905211ba097aa860244a376575ae2edbaca6e51402a24bc2964854b9b273e09df3d31a2bcffc91509f7eecb2118b183fb0e0eb544f33403fa235c274` +[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-server-linux-ppc64le.tar.gz) | `6d21fbf39b9d3a0df9642407d6f698fabdc809aca83af197bceb58a81b25846072f407f8fb7caae2e02dc90912e3e0f5894f062f91bcb69f8c2329625d3dfeb7` +[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-server-linux-s390x.tar.gz) | `ddcda4dc360ca97705f71bf2a18ddacd7b7ddf77535b62e699e97a1b2dd24843751313351d0112e238afe69558e8271eba4d27ab77bb67b4b9e3fbde6eec85c9` + +### Node Binaries + +filename | sha512 hash +-------- | ----------- +[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-node-linux-amd64.tar.gz) | `78915a9bde35c70c67014f0cea8754849db4f6a84491a3ad9678fd3bc0203e43af5a63cfafe104ae1d56b05ce74893a87a6dcd008d7859e1af6b3bce65425b5d` +[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-node-linux-arm.tar.gz) | `3218e811abcb0cb09d80742def339be3916db5e9bbc62c0dc8e6d87085f7e3d9eeed79dea081906f1de78ddd07b7e3acdbd7765fdb838d262bb35602fd1df106` +[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-node-linux-arm64.tar.gz) | `fa22de9c4440b8fb27f4e77a5a63c5e1c8aa8aa30bb79eda843b0f40498c21b8c0ad79fff1d841bb9fef53fe20da272506de9a86f81a0b36d028dbeab2e482ce` +[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-node-linux-ppc64le.tar.gz) | `bbda9b5cc66e8f13d235703b2a85e2c4f02fa16af047be4d27a3e198e11eb11706e4a0fbb6c20978c770b069cd4cd9894b661f09937df9d507411548c36576e0` +[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-node-linux-s390x.tar.gz) | `b2ed1eda013069adce2aac00b86d75b84e006cfce9bafac0b5a2bafcb60f8f2cb346b5ea44eafa72d777871abef1ea890eb3a2a05de28968f9316fa88886a8ed` +[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.2/kubernetes-node-windows-amd64.tar.gz) | `bd8eb23dba711f31b5148257076b1bbe9629f2a75de213b2c779bd5b29279e9bf22f8bde32f4bc814f4c0cc49e19671eb8b24f4105f0fe2c1490c4b78ec3c704` + +## Changelog since v1.18.0-alpha.1 + +### Other notable changes + +* Bump golang/mock version to v1.3.1 ([#87326](https://github.com/kubernetes/kubernetes/pull/87326), [@wawa0210](https://github.com/wawa0210)) +* fix a bug that orphan revision cannot be adopted and statefulset cannot be synced ([#86801](https://github.com/kubernetes/kubernetes/pull/86801), [@likakuli](https://github.com/likakuli)) +* Azure storage clients now suppress requests on 
throttling ([#87306](https://github.com/kubernetes/kubernetes/pull/87306), [@feiskyer](https://github.com/feiskyer))
+* Introduce Alpha field `Immutable` in both Secret and ConfigMap objects to mark their contents as immutable. The implementation is hidden behind feature gate `ImmutableEphemeralVolumes` (currently in Alpha stage). ([#86377](https://github.com/kubernetes/kubernetes/pull/86377), [@wojtek-t](https://github.com/wojtek-t))
+* EndpointSlices will now be enabled by default. A new `EndpointSliceProxying` feature gate determines if kube-proxy will use EndpointSlices; this is disabled by default. ([#86137](https://github.com/kubernetes/kubernetes/pull/86137), [@robscott](https://github.com/robscott))
+* kubeadm upgrades always persist the etcd backup for stacked etcd ([#86861](https://github.com/kubernetes/kubernetes/pull/86861), [@SataQiu](https://github.com/SataQiu))
+* Fix the bug where a PIP's DNS is deleted if the DNS label service annotation isn't set. ([#87246](https://github.com/kubernetes/kubernetes/pull/87246), [@nilo19](https://github.com/nilo19))
+* New flag `--show-hidden-metrics-for-version` in kube-controller-manager can be used to show all hidden metrics that were deprecated in the previous minor release. ([#85281](https://github.com/kubernetes/kubernetes/pull/85281), [@RainbowMango](https://github.com/RainbowMango))
+* Azure network and VM clients now suppress requests on throttling ([#87122](https://github.com/kubernetes/kubernetes/pull/87122), [@feiskyer](https://github.com/feiskyer))
+* `kubectl apply -f <directory> --prune -n <namespace>` should prune all resources not defined in the file in the CLI-specified namespace. ([#85613](https://github.com/kubernetes/kubernetes/pull/85613), [@MartinKaburu](https://github.com/MartinKaburu))
+* Fixes service account token admission error in clusters that do not run the service account token controller ([#87029](https://github.com/kubernetes/kubernetes/pull/87029), [@liggitt](https://github.com/liggitt))
+* CustomResourceDefinition status fields are no longer required for client validation when submitting manifests. ([#87213](https://github.com/kubernetes/kubernetes/pull/87213), [@hasheddan](https://github.com/hasheddan))
+* All apiservers log request lines in a more greppable format. ([#87203](https://github.com/kubernetes/kubernetes/pull/87203), [@lavalamp](https://github.com/lavalamp))
+* provider/azure: Network security groups can now be in a separate resource group. ([#87035](https://github.com/kubernetes/kubernetes/pull/87035), [@CecileRobertMichon](https://github.com/CecileRobertMichon))
+* Cleaned up the output from `kubectl describe CSINode <name>`. ([#85283](https://github.com/kubernetes/kubernetes/pull/85283), [@huffmanca](https://github.com/huffmanca))
+* Fixed the following ([#84265](https://github.com/kubernetes/kubernetes/pull/84265), [@bhagwat070919](https://github.com/bhagwat070919))
+  * - AWS Cloud Provider attempts to delete LoadBalancer security group it didn’t provision
+  * - AWS Cloud Provider creates default LoadBalancer security group even if annotation [service.beta.kubernetes.io/aws-load-balancer-security-groups] is present
+* kubelet: resource metrics endpoint `/metrics/resource/v1alpha1` as well as all metrics under this endpoint have been deprecated.
([#86282](https://github.com/kubernetes/kubernetes/pull/86282), [@RainbowMango](https://github.com/RainbowMango))
+  * Please convert to the following metrics emitted by endpoint `/metrics/resource`:
+  * - scrape_error --> scrape_error
+  * - node_cpu_usage_seconds_total --> node_cpu_usage_seconds
+  * - node_memory_working_set_bytes --> node_memory_working_set_bytes
+  * - container_cpu_usage_seconds_total --> container_cpu_usage_seconds
+  * - container_memory_working_set_bytes --> container_memory_working_set_bytes
+* You can now pass "--node-ip ::" to kubelet to indicate that it should autodetect an IPv6 address to use as the node's primary address. ([#85850](https://github.com/kubernetes/kubernetes/pull/85850), [@danwinship](https://github.com/danwinship))
+* kubeadm: support automatic retry after failing to pull image ([#86899](https://github.com/kubernetes/kubernetes/pull/86899), [@SataQiu](https://github.com/SataQiu))
+* TODO ([#87044](https://github.com/kubernetes/kubernetes/pull/87044), [@jennybuckley](https://github.com/jennybuckley))
+* Improved YAML parsing performance ([#85458](https://github.com/kubernetes/kubernetes/pull/85458), [@cjcullen](https://github.com/cjcullen))
+* Fixed a bug which could prevent a provider ID from ever being set for a node if an error occurred while determining the provider ID when the node was added. ([#87043](https://github.com/kubernetes/kubernetes/pull/87043), [@zjs](https://github.com/zjs))
+* fix a regression in kubenet that prevented pods from obtaining IP addresses ([#85993](https://github.com/kubernetes/kubernetes/pull/85993), [@chendotjs](https://github.com/chendotjs))
+* Bind kube-dns containers to Linux nodes to avoid Windows scheduling ([#83358](https://github.com/kubernetes/kubernetes/pull/83358), [@wawa0210](https://github.com/wawa0210))
+* The following features are unconditionally enabled and the corresponding `--feature-gates` flags have been removed: `PodPriority`, `TaintNodesByCondition`, `ResourceQuotaScopeSelectors` and `ScheduleDaemonSetPods` ([#86210](https://github.com/kubernetes/kubernetes/pull/86210), [@draveness](https://github.com/draveness))
+* Bind dns-horizontal containers to Linux nodes to avoid Windows scheduling on Kubernetes clusters that include both Linux and Windows nodes ([#83364](https://github.com/kubernetes/kubernetes/pull/83364), [@wawa0210](https://github.com/wawa0210))
+* fix kubectl annotate error when local=true is set ([#86952](https://github.com/kubernetes/kubernetes/pull/86952), [@zhouya0](https://github.com/zhouya0))
+* Bug fixes: ([#84163](https://github.com/kubernetes/kubernetes/pull/84163), [@david-tigera](https://github.com/david-tigera))
+  * Make sure we include latest packages node #351 ([@caseydavenport](https://github.com/caseydavenport))
+* fix kubectl apply set-last-applied namespaces error ([#86474](https://github.com/kubernetes/kubernetes/pull/86474), [@zhouya0](https://github.com/zhouya0))
+* Add VolumeBinder method to FrameworkHandle interface, which allows users to get the volume binder when implementing scheduler framework plugins.
([#86940](https://github.com/kubernetes/kubernetes/pull/86940), [@skilxn-go](https://github.com/skilxn-go))
+* elasticsearch supports automatically setting the advertise address ([#85944](https://github.com/kubernetes/kubernetes/pull/85944), [@SataQiu](https://github.com/SataQiu))
+* If a serving certificate's param specifies a name that is an IP for an SNI certificate, it will have priority for replying to server connections. ([#85308](https://github.com/kubernetes/kubernetes/pull/85308), [@deads2k](https://github.com/deads2k))
+* kube-proxy: Added dual-stack IPv4/IPv6 support to the iptables proxier. ([#82462](https://github.com/kubernetes/kubernetes/pull/82462), [@vllry](https://github.com/vllry))
+* Azure VMSS/VMSSVM clients now suppress requests on throttling ([#86740](https://github.com/kubernetes/kubernetes/pull/86740), [@feiskyer](https://github.com/feiskyer))
+* New metric kubelet_pleg_last_seen_seconds to aid diagnosis of PLEG not healthy issues. ([#86251](https://github.com/kubernetes/kubernetes/pull/86251), [@bboreham](https://github.com/bboreham))
+* For subprotocol negotiation, both client and server protocols are now required. ([#86646](https://github.com/kubernetes/kubernetes/pull/86646), [@tedyu](https://github.com/tedyu))
+* kubeadm: use bind-address option to configure the kube-controller-manager and kube-scheduler http probes ([#86493](https://github.com/kubernetes/kubernetes/pull/86493), [@aojea](https://github.com/aojea))
+* Marked scheduler's metrics scheduling_algorithm_predicate_evaluation_seconds and ([#86584](https://github.com/kubernetes/kubernetes/pull/86584), [@xiaoanyunfei](https://github.com/xiaoanyunfei))
+  * scheduling_algorithm_priority_evaluation_seconds as deprecated. Those are replaced by framework_extension_point_duration_seconds[extension_point="Filter"] and framework_extension_point_duration_seconds[extension_point="Score"] respectively.
+* Marked scheduler's scheduling_duration_seconds Summary metric as deprecated ([#86586](https://github.com/kubernetes/kubernetes/pull/86586), [@xiaoanyunfei](https://github.com/xiaoanyunfei))
+* Add instructions about how to bring up an e2e test cluster ([#85836](https://github.com/kubernetes/kubernetes/pull/85836), [@YangLu1031](https://github.com/YangLu1031))
+* If a required flag is not provided to a command, the user will only see the required flag error message, instead of the entire usage menu. ([#86693](https://github.com/kubernetes/kubernetes/pull/86693), [@sallyom](https://github.com/sallyom))
+* kubeadm: tolerate whitespace when validating certificate authority PEM data in kubeconfig files ([#86705](https://github.com/kubernetes/kubernetes/pull/86705), [@neolit123](https://github.com/neolit123))
+* kubeadm: add support for the "ci/k8s-master" version label as a replacement for "ci-cross/*", which no longer exists. ([#86609](https://github.com/kubernetes/kubernetes/pull/86609), [@Pensu](https://github.com/Pensu))
+* Fix EndpointSlice controller race condition and ensure that it handles external changes to EndpointSlices.
([#85703](https://github.com/kubernetes/kubernetes/pull/85703), [@robscott](https://github.com/robscott))
+* Fix nil pointer dereference in azure cloud provider ([#85975](https://github.com/kubernetes/kubernetes/pull/85975), [@ldx](https://github.com/ldx))
+* fix: azure disk could not be mounted on Standard_DC4s/DC2s instances ([#86612](https://github.com/kubernetes/kubernetes/pull/86612), [@andyzhangx](https://github.com/andyzhangx))
+* Fixes v1.17.0 regression in --service-cluster-ip-range handling with IPv4 ranges larger than 65536 IP addresses ([#86534](https://github.com/kubernetes/kubernetes/pull/86534), [@liggitt](https://github.com/liggitt))
+* Adds back support for AlwaysCheckAllPredicates flag. ([#86496](https://github.com/kubernetes/kubernetes/pull/86496), [@ahg-g](https://github.com/ahg-g))
+* Azure global rate limit is switched to per-client. A set of new rate limit configuration options are introduced, including routeRateLimit, SubnetsRateLimit, InterfaceRateLimit, RouteTableRateLimit, LoadBalancerRateLimit, PublicIPAddressRateLimit, SecurityGroupRateLimit, VirtualMachineRateLimit, StorageAccountRateLimit, DiskRateLimit, SnapshotRateLimit, VirtualMachineScaleSetRateLimit and VirtualMachineSizeRateLimit. ([#86515](https://github.com/kubernetes/kubernetes/pull/86515), [@feiskyer](https://github.com/feiskyer))
+  * The original rate limit options are used as default values for the new clients' rate limiters.
+* Fix issue [#85805](https://github.com/kubernetes/kubernetes/pull/85805) about a resource not found in azure cloud provider when the LB is specified in another resource group. ([#86502](https://github.com/kubernetes/kubernetes/pull/86502), [@levimm](https://github.com/levimm))
+* `AlwaysCheckAllPredicates` is deprecated in scheduler Policy API. ([#86369](https://github.com/kubernetes/kubernetes/pull/86369), [@Huang-Wei](https://github.com/Huang-Wei))
+* Kubernetes KMS provider for data encryption now supports disabling the in-memory data encryption key (DEK) cache by setting cachesize to a negative value (see the configuration sketch below). ([#86294](https://github.com/kubernetes/kubernetes/pull/86294), [@enj](https://github.com/enj))
+* option `preConfiguredBackendPoolLoadBalancerTypes` is added to azure cloud provider for the pre-configured load balancers, possible values: `""`, `"internal"`, `"external"`, `"all"` ([#86338](https://github.com/kubernetes/kubernetes/pull/86338), [@gossion](https://github.com/gossion))
+* Promote StartupProbe to beta for 1.18 release ([#83437](https://github.com/kubernetes/kubernetes/pull/83437), [@matthyx](https://github.com/matthyx))
+* Fixes issue where AAD token obtained by kubectl is incompatible with on-behalf-of flow and oidc. ([#86412](https://github.com/kubernetes/kubernetes/pull/86412), [@weinong](https://github.com/weinong))
+  * The audience claim before this fix has "spn:" prefix. After this fix, "spn:" prefix is omitted.
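To make the KMS cache change above concrete, here is a minimal, hypothetical sketch of an `EncryptionConfiguration` with the DEK cache disabled. The provider name and socket path are illustrative placeholders, not values defined by this release:

```bash
# Minimal sketch (assumed layout): disable the in-memory DEK cache by
# setting cachesize to a negative value. The provider name and endpoint
# below are hypothetical placeholders.
cat <<'EOF' > /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: example-kms-provider      # hypothetical provider name
          endpoint: unix:///tmp/kms.sock  # hypothetical socket path
          cachesize: -1                   # negative value disables the DEK cache
          timeout: 3s
EOF
```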
+* change CounterVec to Counter about PLEGDiscardEvent ([#86167](https://github.com/kubernetes/kubernetes/pull/86167), [@yiyang5055](https://github.com/yiyang5055)) +* hollow-node do not use remote CRI anymore ([#86425](https://github.com/kubernetes/kubernetes/pull/86425), [@jkaniuk](https://github.com/jkaniuk)) +* hollow-node use fake CRI ([#85879](https://github.com/kubernetes/kubernetes/pull/85879), [@gongguan](https://github.com/gongguan)) + + + +# v1.18.0-alpha.1 + +[Documentation](https://docs.k8s.io) + +## Downloads for v1.18.0-alpha.1 + + +filename | sha512 hash +-------- | ----------- +[kubernetes.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes.tar.gz) | `0c4904efc7f4f1436119c91dc1b6c93b3bd9c7490362a394bff10099c18e1e7600c4f6e2fcbaeb2d342a36c4b20692715cf7aa8ada6dfac369f44cc9292529d7` +[kubernetes-src.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-src.tar.gz) | `0a50fc6816c730ca5ae4c4f26d5ad7b049607d29f6a782a4e5b4b05ac50e016486e269dafcc6a163bd15e1a192780a9a987f1bb959696993641c603ed1e841c8` + +### Client Binaries + +filename | sha512 hash +-------- | ----------- +[kubernetes-client-darwin-386.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-client-darwin-386.tar.gz) | `c6d75f7f3f20bef17fc7564a619b54e6f4a673d041b7c9ec93663763a1cc8dd16aecd7a2af70e8d54825a0eecb9762cf2edfdade840604c9a32ecd9cc2d5ac3c` +[kubernetes-client-darwin-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-client-darwin-amd64.tar.gz) | `ca1f19db289933beace6daee6fc30af19b0e260634ef6e89f773464a05e24551c791be58b67da7a7e2a863e28b7cbcc7b24b6b9bf467113c26da76ac8f54fdb6` +[kubernetes-client-linux-386.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-client-linux-386.tar.gz) | `af2e673653eb39c3f24a54efc68e1055f9258bdf6cf8fea42faf42c05abefc2da853f42faac3b166c37e2a7533020b8993b98c0d6d80a5b66f39e91d8ae0a3fb` +[kubernetes-client-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-client-linux-amd64.tar.gz) | `9009032c3f94ac8a78c1322a28e16644ce3b20989eb762685a1819148aed6e883ca8e1200e5ec37ec0853f115c67e09b5d697d6cf5d4c45f653788a2d3a2f84f` +[kubernetes-client-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-client-linux-arm.tar.gz) | `afba9595b37a3f2eead6e3418573f7ce093b55467dce4da0b8de860028576b96b837a2fd942f9c276e965da694e31fbd523eeb39aefb902d7e7a2f169344d271` +[kubernetes-client-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-client-linux-arm64.tar.gz) | `04fc3b2fe3f271807f0bc6c61be52456f26a1af904964400be819b7914519edc72cbab9afab2bb2e2ba1a108963079367cedfb253c9364c0175d1fcc64d52f5c` +[kubernetes-client-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-client-linux-ppc64le.tar.gz) | `04c7edab874b33175ff7bebfff5b3a032bc6eb088fcd7387ffcd5b3fa71395ca8c5f9427b7ddb496e92087dfdb09eaf14a46e9513071d3bd73df76c182922d38` +[kubernetes-client-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-client-linux-s390x.tar.gz) | `499287dbbc33399a37b9f3b35e0124ff20b17b6619f25a207ee9c606ef261af61fa0c328dde18c7ce2d3dfb2eea2376623bc3425d16bc8515932a68b44f8bede` +[kubernetes-client-windows-386.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-client-windows-386.tar.gz) | `cf84aeddf00f126fb13c0436b116dd0464a625659e44c84bf863517db0406afb4eefd86807e7543c4f96006d275772fbf66214ae7d582db5865c84ac3545b3e6` +[kubernetes-client-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-client-windows-amd64.tar.gz) | 
`69f20558ccd5cd6dbaccf29307210db4e687af21f6d71f68c69d3a39766862686ac1333ab8a5012010ca5c5e3c11676b45e498e3d4c38773da7d24bcefc46d95` + +### Server Binaries + +filename | sha512 hash +-------- | ----------- +[kubernetes-server-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-server-linux-amd64.tar.gz) | `3f29df2ce904a0f10db4c1d7a425a36f420867b595da3fa158ae430bfead90def2f2139f51425b349faa8a9303dcf20ea01657cb6ea28eb6ad64f5bb32ce2ed1` +[kubernetes-server-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-server-linux-arm.tar.gz) | `4a21073b2273d721fbf062c254840be5c8471a010bcc0c731b101729e36e61f637cb7fcb521a22e8d24808510242f4fff8a6ca40f10e9acd849c2a47bf135f27` +[kubernetes-server-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-server-linux-arm64.tar.gz) | `7f1cb6d721bedc90e28b16f99bea7e59f5ad6267c31ef39c14d34db6ad6aad87ee51d2acdd01b6903307c1c00b58ff6b785a03d5a491cc3f8a4df9a1d76d406c` +[kubernetes-server-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-server-linux-ppc64le.tar.gz) | `8f2b552030b5274b1c2c7c166eacd5a14b0c6ca0f23042f4c52efe87e22a167ba4460dcd66615a5ecd26d9e88336be1fb555548392e70efe59070dd2c314da98` +[kubernetes-server-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-server-linux-s390x.tar.gz) | `8d9f2c96f66edafb7c8b3aa90960d29b41471743842aede6b47b3b2e61f4306fb6fc60b9ebc18820c547ee200bfedfe254c1cde962d447c791097dd30e79abdb` + +### Node Binaries + +filename | sha512 hash +-------- | ----------- +[kubernetes-node-linux-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-node-linux-amd64.tar.gz) | `84194cb081d1502f8ca68143569f9707d96f1a28fcf0c574ebd203321463a8b605f67bb2a365eaffb14fbeb8d55c8d3fa17431780b242fb9cba3a14426a0cd4a` +[kubernetes-node-linux-arm.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-node-linux-arm.tar.gz) | `0091e108ab94fd8683b89c597c4fdc2fbf4920b007cfcd5297072c44bc3a230dfe5ceed16473e15c3e6cf5edab866d7004b53edab95be0400cc60e009eee0d9d` +[kubernetes-node-linux-arm64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-node-linux-arm64.tar.gz) | `b7e85682cc2848a35d52fd6f01c247f039ee1b5dd03345713821ea10a7fa9939b944f91087baae95eaa0665d11857c1b81c454f720add077287b091f9f19e5d3` +[kubernetes-node-linux-ppc64le.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-node-linux-ppc64le.tar.gz) | `cd1f0849e9c62b5d2c93ff0cebf58843e178d8a88317f45f76de0db5ae020b8027e9503a5fccc96445184e0d77ecdf6f57787176ac31dbcbd01323cd0a190cbb` +[kubernetes-node-linux-s390x.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-node-linux-s390x.tar.gz) | `e1e697a34424c75d75415b613b81c8af5f64384226c5152d869f12fd7db1a3e25724975b73fa3d89e56e4bf78d5fd07e68a709ba8566f53691ba6a88addc79ea` +[kubernetes-node-windows-amd64.tar.gz](https://dl.k8s.io/v1.18.0-alpha.1/kubernetes-node-windows-amd64.tar.gz) | `c725a19a4013c74e22383ad3fb4cb799b3e161c4318fdad066daf806730a89bc3be3ff0f75678d02b3cbe52b2ef0c411c0639968e200b9df470be40bb2c015cc` + +## Changelog since v1.17.0 + +### Action Required + +* action required ([#85363](https://github.com/kubernetes/kubernetes/pull/85363), [@immutableT](https://github.com/immutableT)) + * 1. Currently, if users were to explicitly specify CacheSize of 0 for KMS provider, they would end-up with a provider that caches up to 1000 keys. This PR changes this behavior. + * Post this PR, when users supply 0 for CacheSize this will result in a validation error. + * 2. CacheSize type was changed from int32 to *int32. 
This allows defaulting logic to differentiate between cases where users explicitly supplied 0 vs. did not supply any value.
+  * 3. KMS Provider's endpoint (path to Unix socket) is now validated when the EncryptionConfiguration file is loaded. This used to be handled by the GRPCService.
+
+### Other notable changes
+
+* fix: azure data disk should use same key as os disk by default ([#86351](https://github.com/kubernetes/kubernetes/pull/86351), [@andyzhangx](https://github.com/andyzhangx))
+* New flag `--show-hidden-metrics-for-version` in kube-proxy can be used to show all hidden metrics that were deprecated in the previous minor release. ([#85279](https://github.com/kubernetes/kubernetes/pull/85279), [@RainbowMango](https://github.com/RainbowMango))
+* Remove cluster-monitoring addon ([#85512](https://github.com/kubernetes/kubernetes/pull/85512), [@serathius](https://github.com/serathius))
+* Changed core_pattern on COS nodes to be an absolute path. ([#86329](https://github.com/kubernetes/kubernetes/pull/86329), [@mml](https://github.com/mml))
+* Track mount operations as uncertain if operation fails with non-final error ([#82492](https://github.com/kubernetes/kubernetes/pull/82492), [@gnufied](https://github.com/gnufied))
+* add kube-proxy flags --ipvs-tcp-timeout, --ipvs-tcpfin-timeout, --ipvs-udp-timeout to configure IPVS connection timeouts. ([#85517](https://github.com/kubernetes/kubernetes/pull/85517), [@andrewsykim](https://github.com/andrewsykim))
+* The sample-apiserver aggregated conformance test has been updated to use the Kubernetes v1.17.0 sample apiserver ([#84735](https://github.com/kubernetes/kubernetes/pull/84735), [@liggitt](https://github.com/liggitt))
+* The underlying format of the `CPUManager` state file has changed. Upgrades should be seamless, but any third-party tools that rely on reading the previous format need to be updated. ([#84462](https://github.com/kubernetes/kubernetes/pull/84462), [@klueska](https://github.com/klueska))
+* Kubernetes will try to acquire the iptables lock every 100 msec for up to 5 seconds instead of every second. This is especially useful for environments using kube-proxy in iptables mode with a high churn rate of services. ([#85771](https://github.com/kubernetes/kubernetes/pull/85771), [@aojea](https://github.com/aojea))
+* Fixed a panic in the kubelet cleaning up pod volumes ([#86277](https://github.com/kubernetes/kubernetes/pull/86277), [@tedyu](https://github.com/tedyu))
+* Azure cloud provider cache TTLs are now configurable; the list of configurable options is as follows: ([#86266](https://github.com/kubernetes/kubernetes/pull/86266), [@zqingqing1](https://github.com/zqingqing1))
+  * - "availabilitySetNodesCacheTTLInSeconds"
+  * - "vmssCacheTTLInSeconds"
+  * - "vmssVirtualMachinesCacheTTLInSeconds"
+  * - "vmCacheTTLInSeconds"
+  * - "loadBalancerCacheTTLInSeconds"
+  * - "nsgCacheTTLInSeconds"
+  * - "routeTableCacheTTLInSeconds"
+* Fixes kube-proxy when EndpointSlice feature gate is enabled on Windows. ([#86016](https://github.com/kubernetes/kubernetes/pull/86016), [@robscott](https://github.com/robscott))
+* Fixes wrong validation result of NetworkPolicy PolicyTypes ([#85747](https://github.com/kubernetes/kubernetes/pull/85747), [@tnqn](https://github.com/tnqn))
+* Fixes an issue with kubelet-reported pod status on deleted/recreated pods.
([#86320](https://github.com/kubernetes/kubernetes/pull/86320), [@liggitt](https://github.com/liggitt))
+* kube-apiserver no longer serves the following deprecated APIs: ([#85903](https://github.com/kubernetes/kubernetes/pull/85903), [@liggitt](https://github.com/liggitt))
+  * All resources under `apps/v1beta1` and `apps/v1beta2` - use `apps/v1` instead
+  * `daemonsets`, `deployments`, `replicasets` resources under `extensions/v1beta1` - use `apps/v1` instead
+  * `networkpolicies` resources under `extensions/v1beta1` - use `networking.k8s.io/v1` instead
+  * `podsecuritypolicies` resources under `extensions/v1beta1` - use `policy/v1beta1` instead
+* kubeadm: fix potential panic when executing "kubeadm reset" with a corrupted kubelet.conf file ([#86216](https://github.com/kubernetes/kubernetes/pull/86216), [@neolit123](https://github.com/neolit123))
+* Fix a bug in port-forward: named port not working with service ([#85511](https://github.com/kubernetes/kubernetes/pull/85511), [@oke-py](https://github.com/oke-py))
+* kube-proxy no longer modifies shared EndpointSlices. ([#86092](https://github.com/kubernetes/kubernetes/pull/86092), [@robscott](https://github.com/robscott))
+* allow for configuration of CoreDNS replica count ([#85837](https://github.com/kubernetes/kubernetes/pull/85837), [@pickledrick](https://github.com/pickledrick))
+* Fixed a regression where the kubelet would fail to update the ready status of pods. ([#84951](https://github.com/kubernetes/kubernetes/pull/84951), [@tedyu](https://github.com/tedyu))
+* Resolves performance regression in client-go discovery clients constructed using `NewDiscoveryClientForConfig` or `NewDiscoveryClientForConfigOrDie`. ([#86168](https://github.com/kubernetes/kubernetes/pull/86168), [@liggitt](https://github.com/liggitt))
+* Make error messages and service event messages clearer ([#86078](https://github.com/kubernetes/kubernetes/pull/86078), [@feiskyer](https://github.com/feiskyer))
+* e2e-test-framework: add e2e test namespace dump if all tests succeed but the cleanup fails. ([#85542](https://github.com/kubernetes/kubernetes/pull/85542), [@schrodit](https://github.com/schrodit))
+* SafeSysctlWhitelist: add net.ipv4.ping_group_range ([#85463](https://github.com/kubernetes/kubernetes/pull/85463), [@AkihiroSuda](https://github.com/AkihiroSuda))
+* kubelet: the metric process_start_time_seconds is now marked with the ALPHA stability level. ([#85446](https://github.com/kubernetes/kubernetes/pull/85446), [@RainbowMango](https://github.com/RainbowMango))
+* API request throttling (due to a high rate of requests) is now reported in the kubelet (and other component) logs by default. The messages are of the form ([#80649](https://github.com/kubernetes/kubernetes/pull/80649), [@RobertKrawitz](https://github.com/RobertKrawitz))
+  * Throttling request took 1.50705208s, request: GET:<url>
+  * The presence of large numbers of these messages, particularly with long delay times, may indicate to the administrator the need to tune the cluster accordingly.
+* Fix API Server potential memory leak issue in processing watch requests.
([#85410](https://github.com/kubernetes/kubernetes/pull/85410), [@answer1991](https://github.com/answer1991))
+* Verify kubelet & kube-proxy can recover after being killed on Windows nodes ([#84886](https://github.com/kubernetes/kubernetes/pull/84886), [@YangLu1031](https://github.com/YangLu1031))
+* Fixed an issue where the scheduler only returned the first failure reason. ([#86022](https://github.com/kubernetes/kubernetes/pull/86022), [@Huang-Wei](https://github.com/Huang-Wei))
+* kubectl/drain: add skip-wait-for-delete-timeout option (usage sketch below). ([#85577](https://github.com/kubernetes/kubernetes/pull/85577), [@michaelgugino](https://github.com/michaelgugino))
+  * If a pod's DeletionTimestamp is older than N seconds, skip waiting for the pod. Seconds must be greater than 0 to skip.
+* The following metrics have been turned off: ([#83841](https://github.com/kubernetes/kubernetes/pull/83841), [@RainbowMango](https://github.com/RainbowMango))
+  * - kubelet_pod_worker_latency_microseconds
+  * - kubelet_pod_start_latency_microseconds
+  * - kubelet_cgroup_manager_latency_microseconds
+  * - kubelet_pod_worker_start_latency_microseconds
+  * - kubelet_pleg_relist_latency_microseconds
+  * - kubelet_pleg_relist_interval_microseconds
+  * - kubelet_eviction_stats_age_microseconds
+  * - kubelet_runtime_operations
+  * - kubelet_runtime_operations_latency_microseconds
+  * - kubelet_runtime_operations_errors
+  * - kubelet_device_plugin_registration_count
+  * - kubelet_device_plugin_alloc_latency_microseconds
+  * - kubelet_docker_operations
+  * - kubelet_docker_operations_latency_microseconds
+  * - kubelet_docker_operations_errors
+  * - kubelet_docker_operations_timeout
+  * - network_plugin_operations_latency_microseconds
+* - Renamed Kubelet metric certificate_manager_server_expiration_seconds to certificate_manager_server_ttl_seconds and changed it to report the seconds until expiration at read time rather than the absolute time of expiry. ([#85874](https://github.com/kubernetes/kubernetes/pull/85874), [@sambdavidson](https://github.com/sambdavidson))
+  * - Improved accuracy of Kubelet metric rest_client_exec_plugin_ttl_seconds.
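A quick usage sketch for the `kubectl drain --skip-wait-for-delete-timeout` option noted above; the node name and the timeout value are placeholders:

```bash
# Drain a node, skipping the wait for pods whose DeletionTimestamp is already
# older than 60 seconds ("my-node" is a placeholder node name).
kubectl drain my-node --ignore-daemonsets --skip-wait-for-delete-timeout=60
```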
+* Bind metadata-agent containers to Linux nodes to avoid Windows scheduling on Kubernetes clusters that include both Linux and Windows nodes ([#83363](https://github.com/kubernetes/kubernetes/pull/83363), [@wawa0210](https://github.com/wawa0210))
+* Bind metrics-server containers to Linux nodes to avoid Windows scheduling on Kubernetes clusters that include both Linux and Windows nodes ([#83362](https://github.com/kubernetes/kubernetes/pull/83362), [@wawa0210](https://github.com/wawa0210))
+* During the initialization phase (preflight), kubeadm now verifies the presence of the conntrack executable ([#85857](https://github.com/kubernetes/kubernetes/pull/85857), [@hnanni](https://github.com/hnanni))
+* A VMSS cache is added to reduce the chance of VMSS GET throttling ([#85885](https://github.com/kubernetes/kubernetes/pull/85885), [@nilo19](https://github.com/nilo19))
+* Update go-winio module version from 0.4.11 to 0.4.14 ([#85739](https://github.com/kubernetes/kubernetes/pull/85739), [@wawa0210](https://github.com/wawa0210))
+* Fix LoadBalancer rule checking so that no unexpected LoadBalancer updates are made ([#85990](https://github.com/kubernetes/kubernetes/pull/85990), [@feiskyer](https://github.com/feiskyer))
+* kubectl drain node --dry-run will list pods that would be evicted or deleted ([#82660](https://github.com/kubernetes/kubernetes/pull/82660), [@sallyom](https://github.com/sallyom))
+* Windows nodes on GCE can use TPM-based authentication to the master. ([#85466](https://github.com/kubernetes/kubernetes/pull/85466), [@pjh](https://github.com/pjh))
+* kubectl/drain: add disable-eviction option. ([#85571](https://github.com/kubernetes/kubernetes/pull/85571), [@michaelgugino](https://github.com/michaelgugino))
+  * Force drain to use delete, even if eviction is supported. This will bypass checking PodDisruptionBudgets, and should be used with caution.
+* kubeadm now errors out whenever an unsupported component config version is supplied for the kubelet and kube-proxy ([#85639](https://github.com/kubernetes/kubernetes/pull/85639), [@rosti](https://github.com/rosti))
+* Fixed issue with addon-resizer using deprecated extensions APIs ([#85793](https://github.com/kubernetes/kubernetes/pull/85793), [@bskiba](https://github.com/bskiba))
+* Includes FSType when describing CSI persistent volumes. ([#85293](https://github.com/kubernetes/kubernetes/pull/85293), [@huffmanca](https://github.com/huffmanca))
+* kubelet now exports a "server_expiration_renew_failure" and "client_expiration_renew_failure" metric counter if the certificate rotations cannot be performed.
([#84614](https://github.com/kubernetes/kubernetes/pull/84614), [@rphillips](https://github.com/rphillips))
+* kubeadm: don't write the kubelet environment file on "upgrade apply" ([#85412](https://github.com/kubernetes/kubernetes/pull/85412), [@boluisa](https://github.com/boluisa))
+* fix azure file AuthorizationFailure ([#85475](https://github.com/kubernetes/kubernetes/pull/85475), [@andyzhangx](https://github.com/andyzhangx))
+* Resolved regression in admission, authentication, and authorization webhook performance in v1.17.0-rc.1 ([#85810](https://github.com/kubernetes/kubernetes/pull/85810), [@liggitt](https://github.com/liggitt))
+* kubeadm: uses the apiserver AdvertiseAddress IP family to choose the etcd endpoint IP family for non-external etcd clusters ([#85745](https://github.com/kubernetes/kubernetes/pull/85745), [@aojea](https://github.com/aojea))
+* kubeadm: Forward cluster name to the controller-manager arguments ([#85817](https://github.com/kubernetes/kubernetes/pull/85817), [@ereslibre](https://github.com/ereslibre))
+* Fixed "requested device X but found Y" attach error on AWS. ([#85675](https://github.com/kubernetes/kubernetes/pull/85675), [@jsafrane](https://github.com/jsafrane))
+* addons: elasticsearch discovery supports IPv6 ([#85543](https://github.com/kubernetes/kubernetes/pull/85543), [@SataQiu](https://github.com/SataQiu))
+* kubeadm: retry `kubeadm-config` ConfigMap creation or mutation if the apiserver is not responding. This will improve resiliency when joining new control plane nodes. ([#85763](https://github.com/kubernetes/kubernetes/pull/85763), [@ereslibre](https://github.com/ereslibre))
+* Update Cluster Autoscaler to 1.17.0; changelog: https://github.com/kubernetes/autoscaler/releases/tag/cluster-autoscaler-1.17.0 ([#85610](https://github.com/kubernetes/kubernetes/pull/85610), [@losipiuk](https://github.com/losipiuk))
+* Filter published OpenAPI schema by making nullable, required fields non-required in order to prevent kubectl from wrongly rejecting null values. ([#85722](https://github.com/kubernetes/kubernetes/pull/85722), [@sttts](https://github.com/sttts))
+* kubectl set resources will no longer return an error if passed an empty change for a resource. ([#85490](https://github.com/kubernetes/kubernetes/pull/85490), [@sallyom](https://github.com/sallyom))
+  * kubectl set subject will no longer return an error if passed an empty change for a resource.
+* kube-apiserver: fixed a conflict error encountered when attempting to delete a pod with gracePeriodSeconds=0 and a resourceVersion precondition ([#85516](https://github.com/kubernetes/kubernetes/pull/85516), [@michaelgugino](https://github.com/michaelgugino))
+* kubeadm: add an upgrade health check that deploys a Job ([#81319](https://github.com/kubernetes/kubernetes/pull/81319), [@neolit123](https://github.com/neolit123))
+* kubeadm: make sure images are pre-pulled even if a tag did not change but their contents changed ([#85603](https://github.com/kubernetes/kubernetes/pull/85603), [@bart0sh](https://github.com/bart0sh))
+* kube-apiserver: Fixes a bug where hidden metrics could not be enabled by the command-line option `--show-hidden-metrics-for-version`.
([#85444](https://github.com/kubernetes/kubernetes/pull/85444), [@RainbowMango](https://github.com/RainbowMango))
+* kubeadm now supports automatic calculation of dual-stack node CIDR masks for kube-controller-manager. ([#85609](https://github.com/kubernetes/kubernetes/pull/85609), [@Arvinderpal](https://github.com/Arvinderpal))
+* Fix bug where EndpointSlice controller would attempt to modify shared objects. ([#85368](https://github.com/kubernetes/kubernetes/pull/85368), [@robscott](https://github.com/robscott))
+* Use the request context to check whether the client has closed the connection, instead of http.CloseNotifier, when processing watch requests. This saves one goroutine per request when the protocol is HTTP/2.x. ([#85408](https://github.com/kubernetes/kubernetes/pull/85408), [@answer1991](https://github.com/answer1991))
+* kubeadm: reset raises warnings if it cannot delete folders ([#85265](https://github.com/kubernetes/kubernetes/pull/85265), [@SataQiu](https://github.com/SataQiu))
+* Wait for kubelet & kube-proxy to be ready on Windows node within 10s ([#85228](https://github.com/kubernetes/kubernetes/pull/85228), [@YangLu1031](https://github.com/YangLu1031))
diff --git a/content/en/docs/tasks/administer-cluster/coredns.md b/content/en/docs/tasks/administer-cluster/coredns.md
index 657459b14586c..2e50d54f06a2c 100644
--- a/content/en/docs/tasks/administer-cluster/coredns.md
+++ b/content/en/docs/tasks/administer-cluster/coredns.md
@@ -63,6 +63,10 @@
 In Kubernetes 1.11, CoreDNS has graduated to General Availability (GA) and is
 installed by default.
 {{< /note >}}
 
+{{< warning >}}
+In Kubernetes 1.18, kube-dns usage with kubeadm has been deprecated and will be removed in a future version.
+{{< /warning >}}
+
 To install kube-dns on versions prior to 1.13, set the `CoreDNS` feature gate value
 to `false`:
 
@@ -72,9 +76,9 @@ kubeadm init --feature-gates=CoreDNS=false
 
 For versions 1.13 and later, follow the guide outlined [here](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase#cmd-phase-addon).
 
-## Upgrading CoreDNS
+## Upgrading CoreDNS
 
-CoreDNS is available in Kubernetes since v1.9.
+CoreDNS has been available in Kubernetes since v1.9.
+You can check the version of CoreDNS shipped with Kubernetes and the changes made to CoreDNS [here](https://github.com/coredns/deployment/blob/master/kubernetes/CoreDNS-k8s_version.md).
 
 CoreDNS can be upgraded manually in case you want to only upgrade CoreDNS or use your own custom image.
diff --git a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
index 8c494935b65ad..0203cfa469f01 100644
--- a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
+++ b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
@@ -248,7 +248,7 @@ linux/amd64, go1.10.3, 2e322f6
 
 ## Known issues
 
-Some Linux distributions (e.g. Ubuntu), use a local DNS resolver by default (systemd-resolved).
+Some Linux distributions (e.g. Ubuntu) use a local DNS resolver by default (systemd-resolved).
 Systemd-resolved moves and replaces `/etc/resolv.conf` with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet's `--resolv-conf` flag to point to the correct `resolv.conf` (with `systemd-resolved`, this is `/run/systemd/resolve/resolv.conf`).
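As a sketch of that workaround (in practice you would usually set this flag through your distribution's kubelet configuration or a systemd drop-in rather than invoking the kubelet by hand):

```bash
# Point the kubelet at the resolv.conf maintained by systemd-resolved
# instead of the stub file it places at /etc/resolv.conf.
kubelet --resolv-conf=/run/systemd/resolve/resolv.conf
```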
@@ -258,10 +258,10 @@ Kubernetes installs do not configure the nodes' `resolv.conf` files to use the cluster DNS by default, because that process is inherently distribution-specific. This should probably be implemented eventually. -Linux's libc (a.k.a. glibc) has a limit for the DNS `nameserver` records to 3 by default. What's more, for the glibc versions which are older than glic-2.17-222 ([the new versions update see this issue](https://access.redhat.com/solutions/58028)), the DNS `search` records has been limited to 6 ([see this bug from 2005](https://bugzilla.redhat.com/show_bug.cgi?id=168253)). Kubernetes needs to consume 1 `nameserver` record and 3 `search` records. This means that if a local installation already uses 3 `nameserver`s or uses more than 3 `search`es while your glibc versions in the affected list, some of those settings will be lost. For the workaround of the DNS `nameserver` records limit, the node can run `dnsmasq` which will provide more `nameserver` entries, you can also use kubelet's `--resolv-conf` flag. For fixing the DNS `search` records limit, consider upgrading your linux distribution or glibc version. +Linux's libc (a.k.a. glibc) has a limit for the DNS `nameserver` records to 3 by default. What's more, for the glibc versions which are older than glibc-2.17-222 ([the new versions update see this issue](https://access.redhat.com/solutions/58028)), the allowed number of DNS `search` records has been limited to 6 ([see this bug from 2005](https://bugzilla.redhat.com/show_bug.cgi?id=168253)). Kubernetes needs to consume 1 `nameserver` record and 3 `search` records. This means that if a local installation already uses 3 `nameserver`s or uses more than 3 `search`es while your glibc version is in the affected list, some of those settings will be lost. To work around the DNS `nameserver` records limit, the node can run `dnsmasq`, which will provide more `nameserver` entries. You can also use kubelet's `--resolv-conf` flag. To fix the DNS `search` records limit, consider upgrading your linux distribution or upgrading to an unaffected version of glibc. If you are using Alpine version 3.3 or earlier as your base image, DNS may not -work properly owing to a known issue with Alpine. +work properly due to a known issue with Alpine. Check [here](https://github.com/kubernetes/kubernetes/issues/30215) for more information. diff --git a/content/en/docs/tasks/administer-cluster/enabling-endpointslices.md b/content/en/docs/tasks/administer-cluster/enabling-endpointslices.md index 99c575dbfb925..b8e4cf900da31 100644 --- a/content/en/docs/tasks/administer-cluster/enabling-endpointslices.md +++ b/content/en/docs/tasks/administer-cluster/enabling-endpointslices.md @@ -35,28 +35,25 @@ components still rely on Endpoints. For now, enabling EndpointSlices should be seen as an addition to Endpoints in a cluster, not a replacement for them. {{< /note >}} -EndpointSlices are considered a beta feature, but only the API is enabled by -default. Both the EndpointSlice controller and the usage of EndpointSlices by -kube-proxy are not enabled by default. - -The EndpointSlice controller creates and manages EndpointSlices in a cluster. -You can enable it with the `EndpointSlice` [feature -gate](/docs/reference/command-line-tools-reference/feature-gates/) on the {{< -glossary_tooltip text="kube-apiserver" term_id="kube-apiserver" >}} and {{< -glossary_tooltip text="kube-controller-manager" -term_id="kube-controller-manager" >}} (`--feature-gates=EndpointSlice=true`). 
-
-For better scalability, you can also enable this feature gate on {{<
-glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}} so EndpointSlices
-will be used as the data source instead of Endpoints.
+EndpointSlices are a beta feature. Both the API and the EndpointSlice
+{{< glossary_tooltip term_id="controller" >}} are enabled by default.
+{{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}
+uses Endpoints by default, not EndpointSlices.
+
+For better scalability and performance, you can enable the
+`EndpointSliceProxying`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+on kube-proxy. That change
+switches the data source to be EndpointSlices, which reduces the amount of
+Kubernetes API traffic to and from kube-proxy.
 
 ## Using EndpointSlices
 
 With EndpointSlices fully enabled in your cluster, you should see corresponding
 EndpointSlice resources for each Endpoints resource. In addition to supporting
-existing Endpoints functionality, EndpointSlices should include new bits of
-information such as topology. They will allow for greater scalability and
-extensibility of network endpoints in your cluster.
+existing Endpoints functionality, EndpointSlices include new bits of information
+such as topology. They will allow for greater scalability and extensibility of
+network endpoints in your cluster.
 
 {{% capture whatsnext %}}
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md b/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md
new file mode 100644
index 0000000000000..54978dc55dfb4
--- /dev/null
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes.md
@@ -0,0 +1,169 @@
+---
+reviewers:
+- michmike
+- patricklang
+title: Adding Windows nodes
+min-kubernetes-server-version: 1.17
+content_template: templates/tutorial
+weight: 30
+---
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.18" state="beta" >}}
+
+You can use Kubernetes to run a mixture of Linux and Windows nodes, so you can mix Pods that run on Linux with Pods that run on Windows. This page shows how to register Windows nodes with your cluster.
+
+{{% /capture %}}
+
+
+{{% capture prerequisites %}} {{< version-check >}}
+
+* Obtain a [Windows Server 2019 license](https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing)
+(or higher) in order to configure the Windows node that hosts Windows containers.
+If you are using VXLAN/Overlay networking, you must also have [KB4489899](https://support.microsoft.com/help/4489899) installed.
+
+* A Linux-based Kubernetes kubeadm cluster in which you have access to the control plane (see [Creating a single control-plane cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)).
+
+{{% /capture %}}
+
+
+{{% capture objectives %}}
+
+* Register a Windows node to the cluster
+* Configure networking so Pods and Services on Linux and Windows can communicate with each other
+
+{{% /capture %}}
+
+
+{{% capture lessoncontent %}}
+
+## Getting Started: Adding a Windows Node to Your Cluster
+
+### Networking Configuration
+
+Once you have a Linux-based Kubernetes control-plane node, you are ready to choose a networking solution. This guide illustrates using Flannel in VXLAN mode for simplicity.
+
+#### Configuring Flannel
+
+1. Prepare Kubernetes control plane for Flannel
+
+    Some minor preparation is recommended on the Kubernetes control plane in our cluster.
+   It is recommended to enable bridged IPv4 traffic to iptables chains when using Flannel. This can be done using the following command:
+
+   ```bash
+   sudo sysctl net.bridge.bridge-nf-call-iptables=1
+   ```
+
+1. Download & configure Flannel for Linux
+
+   Download the most recent Flannel manifest:
+
+   ```bash
+   wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
+   ```
+
+   Modify the `net-conf.json` section of the flannel manifest in order to set the VNI to 4096 and the Port to 4789. It should look as follows:
+
+   ```json
+   net-conf.json: |
+       {
+         "Network": "10.244.0.0/16",
+         "Backend": {
+           "Type": "vxlan",
+           "VNI" : 4096,
+           "Port": 4789
+         }
+       }
+   ```
+
+   {{< note >}}The VNI must be set to 4096 and port 4789 for Flannel on Linux to interoperate with Flannel on Windows. See the [VXLAN documentation](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan) for an explanation of these fields.{{< /note >}}
+
+   {{< note >}}To use L2Bridge/Host-gateway mode instead, change the value of `Type` to `"host-gw"` and omit `VNI` and `Port`.{{< /note >}}
+
+1. Apply the Flannel manifest and validate
+
+   Let's apply the Flannel configuration:
+
+   ```bash
+   kubectl apply -f kube-flannel.yml
+   ```
+
+   After a few minutes, you should see all the pods running if the Flannel pod network was deployed.
+
+   ```bash
+   kubectl get pods -n kube-system
+   ```
+
+   The output should include the Linux flannel DaemonSet as running:
+
+   ```
+   NAMESPACE     NAME                                      READY        STATUS    RESTARTS   AGE
+   ...
+   kube-system   kube-flannel-ds-54954                     1/1          Running   0          1m
+   ```
+
+1. Add Windows Flannel and kube-proxy DaemonSets
+
+   Now you can add Windows-compatible versions of Flannel and kube-proxy. In order
+   to ensure that you get a compatible version of kube-proxy, you'll need to substitute
+   the tag of the image. The following example shows usage for Kubernetes {{< param "fullversion" >}},
+   but you should adjust the version for your own deployment.
+
+   ```bash
+   curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/kube-proxy.yml | sed 's/VERSION/{{< param "fullversion" >}}/g' | kubectl apply -f -
+   kubectl apply -f https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-overlay.yml
+   ```
+
+   {{< note >}}
+   If you're using host-gateway, use https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-host-gw.yml instead.
+   {{< /note >}}
+
+### Joining a Windows worker node
+{{< note >}}
+You must install the `Containers` feature and install Docker. Instructions
+to do so are available at [Install Docker Engine - Enterprise on Windows Servers](https://docs.docker.com/ee/docker-ee/windows/docker-ee/#install-docker-engine---enterprise).
+{{< /note >}}
+
+{{< note >}}
+All code snippets in Windows sections are to be run in a PowerShell environment
+with elevated permissions (Administrator) on the Windows worker node.
+{{< /note >}}
+
+1. Install wins, kubelet, and kubeadm.
+
+   ```PowerShell
+   curl.exe -LO https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/PrepareNode.ps1
+   .\PrepareNode.ps1 -KubernetesVersion {{< param "fullversion" >}}
+   ```
+
+1. Run `kubeadm` to join the node
+
+   Use the command that was given to you when you ran `kubeadm init` on a control plane host.
+   If you no longer have this command, or the token has expired, you can run `kubeadm token create --print-join-command`
+   (on a control plane host) to generate a new token and join command.
+
+
+#### Verifying your installation
+
+You should now be able to view the Windows node in your cluster by running:
+
+```bash
+kubectl get nodes -o wide
+```
+
+If your new node is in the `NotReady` state, it is likely because the flannel image is still downloading.
+You can check the progress as before by checking on the flannel pods in the `kube-system` namespace:
+
+```shell
+kubectl -n kube-system get pods -l app=flannel
+```
+
+Once the flannel Pod is running, your node should enter the `Ready` state and then be available to handle workloads.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+- [Upgrading Windows kubeadm nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes)
+
+{{% /capture %}}
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
index c3ef0caa10ecc..6329c4a3959b4 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
@@ -3,6 +3,7 @@ reviewers:
 - sig-cluster-lifecycle
 title: Certificate Management with kubeadm
 content_template: templates/task
+weight: 10
 ---
 
 {{% capture overview %}}
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
index ce898371fbd82..9fc79c1e12b1f 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
@@ -3,16 +3,19 @@ reviewers:
 - sig-cluster-lifecycle
 title: Upgrading kubeadm clusters
 content_template: templates/task
+weight: 20
+min-kubernetes-server-version: 1.18
 ---
 
 {{% capture overview %}}
 
 This page explains how to upgrade a Kubernetes cluster created with kubeadm from version
-1.16.x to version 1.17.x, and from version 1.17.x to 1.17.y (where `y > x`).
+1.17.x to version 1.18.x, and from version 1.18.x to 1.18.y (where `y > x`).
 
 To see information about upgrading clusters created using older versions of kubeadm, please refer to following pages instead:
 
+- [Upgrading kubeadm cluster from 1.16 to 1.17](https://v1-17.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
 - [Upgrading kubeadm cluster from 1.15 to 1.16](https://v1-16.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
 - [Upgrading kubeadm cluster from 1.14 to 1.15](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/)
 - [Upgrading kubeadm cluster from 1.13 to 1.14](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/)
@@ -27,7 +30,7 @@ The upgrade workflow at high level is the following:
 
 {{% capture prerequisites %}}
 
-- You need to have a kubeadm Kubernetes cluster running version 1.16.0 or later.
+- You need to have a kubeadm Kubernetes cluster running version 1.17.0 or later.
 - [Swap must be disabled](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux).
 - The cluster should use a static control plane and etcd pods or external etcd.
 - Make sure you read the [release notes]({{< latest-release-notes >}}) carefully.
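Before picking a target version, it can be useful to confirm what you are currently running. A minimal sketch, assuming you have `kubectl` access to the cluster and `kubeadm` installed on the node (the exact output format varies by version):

```shell
# Confirm the versions you are upgrading from before planning the upgrade.
kubectl version --short     # client and API server versions
kubectl get nodes           # per-node kubelet versions, in the VERSION column
kubeadm version -o short    # version of the locally installed kubeadm binary
```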
@@ -54,12 +57,12 @@ The upgrade workflow at high level is the following: apt update apt-cache madison kubeadm # find the latest 1.17 version in the list - # it should look like 1.17.x-00, where x is the latest patch + # it should look like 1.18.x-00, where x is the latest patch {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} yum list --showduplicates kubeadm --disableexcludes=kubernetes # find the latest 1.17 version in the list - # it should look like 1.17.x-0, where x is the latest patch + # it should look like 1.18.x-0, where x is the latest patch {{% /tab %}} {{< /tabs >}} @@ -71,18 +74,18 @@ The upgrade workflow at high level is the following: {{< tabs name="k8s_install_kubeadm_first_cp" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.17.x-00 with the latest patch version + # replace x in 1.18.x-00 with the latest patch version apt-mark unhold kubeadm && \ - apt-get update && apt-get install -y kubeadm=1.17.x-00 && \ + apt-get update && apt-get install -y kubeadm=1.18.x-00 && \ apt-mark hold kubeadm # since apt-get version 1.1 you can also use the following method apt-get update && \ - apt-get install -y --allow-change-held-packages kubeadm=1.17.x-00 + apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00 {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.17.x-0 with the latest patch version - yum install -y kubeadm-1.17.x-0 --disableexcludes=kubernetes + # replace x in 1.18.x-0 with the latest patch version + yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}} @@ -112,28 +115,30 @@ The upgrade workflow at high level is the following: [upgrade/config] Reading configuration from the cluster... [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [preflight] Running pre-flight checks. - [upgrade] Making sure the cluster is healthy: + [upgrade] Running cluster health checks [upgrade] Fetching available versions to upgrade to - [upgrade/versions] Cluster version: v1.16.0 - [upgrade/versions] kubeadm version: v1.17.0 + [upgrade/versions] Cluster version: v1.17.3 + [upgrade/versions] kubeadm version: v1.18.0 + [upgrade/versions] Latest stable version: v1.18.0 + [upgrade/versions] Latest version in the v1.17 series: v1.18.0 Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': - COMPONENT CURRENT AVAILABLE - Kubelet 1 x v1.16.0 v1.17.0 + COMPONENT CURRENT AVAILABLE + Kubelet 1 x v1.17.3 v1.18.0 - Upgrade to the latest version in the v1.16 series: + Upgrade to the latest version in the v1.17 series: COMPONENT CURRENT AVAILABLE - API Server v1.16.0 v1.17.0 - Controller Manager v1.16.0 v1.17.0 - Scheduler v1.16.0 v1.17.0 - Kube Proxy v1.16.0 v1.17.0 - CoreDNS 1.6.2 1.6.5 - Etcd 3.3.15 3.4.3-0 + API Server v1.17.3 v1.18.0 + Controller Manager v1.17.3 v1.18.0 + Scheduler v1.17.3 v1.18.0 + Kube Proxy v1.17.3 v1.18.0 + CoreDNS 1.6.5 1.6.7 + Etcd 3.4.3 3.4.3-0 You can now apply the upgrade by executing the following command: - kubeadm upgrade apply v1.17.0 + kubeadm upgrade apply v1.18.0 _____________________________________________________________________ ``` @@ -150,78 +155,79 @@ The upgrade workflow at high level is the following: ```shell # replace x with the patch version you picked for this upgrade - sudo kubeadm upgrade apply v1.17.x + sudo kubeadm upgrade apply v1.18.x ``` You should see output similar to this: ``` - [preflight] Running pre-flight checks. 
- [upgrade] Making sure the cluster is healthy: [upgrade/config] Making sure the configuration is correct: [upgrade/config] Reading configuration from the cluster... [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' - [upgrade/version] You have chosen to change the cluster version to "v1.17.0" - [upgrade/versions] Cluster version: v1.16.0 - [upgrade/versions] kubeadm version: v1.17.0 + [preflight] Running pre-flight checks. + [upgrade] Running cluster health checks + [upgrade/version] You have chosen to change the cluster version to "v1.18.0" + [upgrade/versions] Cluster version: v1.17.3 + [upgrade/versions] kubeadm version: v1.18.0 [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd] [upgrade/prepull] Prepulling image for component etcd. [upgrade/prepull] Prepulling image for component kube-apiserver. [upgrade/prepull] Prepulling image for component kube-controller-manager. [upgrade/prepull] Prepulling image for component kube-scheduler. - [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler - [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd - [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler + [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler + [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd + [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler [upgrade/prepull] Prepulled image for component etcd. - [upgrade/prepull] Prepulled image for component kube-controller-manager. [upgrade/prepull] Prepulled image for component kube-apiserver. + [upgrade/prepull] Prepulled image for component kube-controller-manager. [upgrade/prepull] Prepulled image for component kube-scheduler. [upgrade/prepull] Successfully prepulled the images for all the control plane components - [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.17.0"... - Static pod: kube-apiserver-luboitvbox hash: 8d931c2296a38951e95684cbcbe3b923 - Static pod: kube-controller-manager-luboitvbox hash: 2480bf6982ad2103c05f6764e20f2787 - Static pod: kube-scheduler-luboitvbox hash: 9b290132363a92652555896288ca3f88 + [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.0"... + Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46 + Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18 + Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366 [upgrade/etcd] Upgrading to TLS for etcd - [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests446257614" + [upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.0" is "3.4.3-0", but the current etcd version is "3.4.3". 
Won't downgrade etcd, instead just continue + [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests308527012" + W0308 18:48:14.535122 3082 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" [upgrade/staticpods] Preparing for "kube-apiserver" upgrade - [upgrade/staticpods] Renewing "apiserver-etcd-client" certificate - [upgrade/staticpods] Renewing "apiserver" certificate - [upgrade/staticpods] Renewing "apiserver-kubelet-client" certificate - [upgrade/staticpods] Renewing "front-proxy-client" certificate - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-05-23-38-03/kube-apiserver.yaml" + [upgrade/staticpods] Renewing apiserver certificate + [upgrade/staticpods] Renewing apiserver-kubelet-client certificate + [upgrade/staticpods] Renewing front-proxy-client certificate + [upgrade/staticpods] Renewing apiserver-etcd-client certificate + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-apiserver.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-apiserver-luboitvbox hash: 8d931c2296a38951e95684cbcbe3b923 - Static pod: kube-apiserver-luboitvbox hash: 1b4e2b09a408c844f9d7b535e593ead9 + Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46 + Static pod: kube-apiserver-myhost hash: 609429acb0d71dce6725836dd97d8bf4 [apiclient] Found 1 Pods for label selector component=kube-apiserver [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade - [upgrade/staticpods] Renewing certificate embedded in "controller-manager.conf" - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-05-23-38-03/kube-controller-manager.yaml" + [upgrade/staticpods] Renewing controller-manager.conf certificate + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-controller-manager.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-controller-manager-luboitvbox hash: 2480bf6982ad2103c05f6764e20f2787 - Static pod: kube-controller-manager-luboitvbox hash: 6617d53423348aa619f1d6e568bb894a + Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18 + Static pod: kube-controller-manager-myhost hash: c7a1232ba2c5dc15641c392662fe5156 [apiclient] Found 1 Pods for label selector component=kube-controller-manager [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! 
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade - [upgrade/staticpods] Renewing certificate embedded in "scheduler.conf" - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-05-23-38-03/kube-scheduler.yaml" + [upgrade/staticpods] Renewing scheduler.conf certificate + [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-scheduler.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-scheduler-luboitvbox hash: 9b290132363a92652555896288ca3f88 - Static pod: kube-scheduler-luboitvbox hash: edf58ab819741a5d1eb9c33de756e3ca + Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366 + Static pod: kube-scheduler-myhost hash: b1b721486ae0ac504c160dcdc457ab0d [apiclient] Found 1 Pods for label selector component=kube-scheduler [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! - [upgrade/staticpods] Renewing certificate embedded in "admin.conf" [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace - [kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster - [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace + [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster + [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token @@ -229,7 +235,7 @@ The upgrade workflow at high level is the following: [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy - [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.0". Enjoy! + [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.0". Enjoy! [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. ``` @@ -271,18 +277,18 @@ Also `sudo kubeadm upgrade plan` is not needed. 
{{< tabs name="k8s_install_kubelet" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.17.x-00 with the latest patch version + # replace x in 1.18.x-00 with the latest patch version apt-mark unhold kubelet kubectl && \ - apt-get update && apt-get install -y kubelet=1.17.x-00 kubectl=1.17.x-00 && \ + apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \ apt-mark hold kubelet kubectl # since apt-get version 1.1 you can also use the following method apt-get update && \ - apt-get install -y --allow-change-held-packages kubelet=1.17.x-00 kubectl=1.17.x-00 + apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00 {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.17.x-0 with the latest patch version - yum install -y kubelet-1.17.x-0 kubectl-1.17.x-0 --disableexcludes=kubernetes + # replace x in 1.18.x-0 with the latest patch version + yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}} @@ -303,18 +309,18 @@ without compromising the minimum required capacity for running your workloads. {{< tabs name="k8s_install_kubeadm_worker_nodes" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.17.x-00 with the latest patch version + # replace x in 1.18.x-00 with the latest patch version apt-mark unhold kubeadm && \ - apt-get update && apt-get install -y kubeadm=1.17.x-00 && \ + apt-get update && apt-get install -y kubeadm=1.18.x-00 && \ apt-mark hold kubeadm # since apt-get version 1.1 you can also use the following method apt-get update && \ - apt-get install -y --allow-change-held-packages kubeadm=1.17.x-00 + apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00 {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.17.x-0 with the latest patch version - yum install -y kubeadm-1.17.x-0 --disableexcludes=kubernetes + # replace x in 1.18.x-0 with the latest patch version + yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}} @@ -349,18 +355,18 @@ without compromising the minimum required capacity for running your workloads. {{< tabs name="k8s_kubelet_and_kubectl" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.17.x-00 with the latest patch version + # replace x in 1.18.x-00 with the latest patch version apt-mark unhold kubelet kubectl && \ - apt-get update && apt-get install -y kubelet=1.17.x-00 kubectl=1.17.x-00 && \ + apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \ apt-mark hold kubelet kubectl # since apt-get version 1.1 you can also use the following method apt-get update && \ - apt-get install -y --allow-change-held-packages kubelet=1.17.x-00 kubectl=1.17.x-00 + apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00 {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.17.x-0 with the latest patch version - yum install -y kubelet-1.17.x-0 kubectl-1.17.x-0 --disableexcludes=kubernetes + # replace x in 1.18.x-0 with the latest patch version + yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}} @@ -375,7 +381,7 @@ without compromising the minimum required capacity for running your workloads. 1. 
Bring the node back online by marking it schedulable:
 
     ```shell
-    # replace <node-to-drain> with the name of your node
+    # replace <node-to-drain> with the name of your node
     kubectl uncordon <node-to-drain>
     ```
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md b/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md
new file mode 100644
index 0000000000000..a6c626a627799
--- /dev/null
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes.md
@@ -0,0 +1,93 @@
+---
+title: Upgrading Windows nodes
+min-kubernetes-server-version: 1.17
+content_template: templates/task
+weight: 40
+---
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.18" state="beta" >}}
+
+This page explains how to upgrade a Windows node [created with kubeadm](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes).
+
+{{% /capture %}}
+
+
+{{% capture prerequisites %}}
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+* Familiarize yourself with [the process for upgrading the rest of your kubeadm
+cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade). You will want to
+upgrade the control plane nodes before upgrading your Windows nodes.
+
+{{% /capture %}}
+
+
+{{% capture steps %}}
+
+## Upgrading worker nodes
+
+### Upgrade kubeadm
+
+1. From the Windows node, upgrade kubeadm:
+
+   ```powershell
+   # replace {{< param "fullversion" >}} with your desired version
+   curl.exe -Lo C:\k\kubeadm.exe https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubeadm.exe
+   ```
+
+### Drain the node
+
+1. From a machine with access to the Kubernetes API,
+   prepare the node for maintenance by marking it unschedulable and evicting the workloads:
+
+   ```shell
+   # replace <node-to-drain> with the name of the node you are draining
+   kubectl drain <node-to-drain> --ignore-daemonsets
+   ```
+
+   You should see output similar to this:
+
+   ```
+   node/ip-172-31-85-18 cordoned
+   node/ip-172-31-85-18 drained
+   ```
+
+### Upgrade the kubelet configuration
+
+1. From the Windows node, call the following command to sync new kubelet configuration:
+
+   ```powershell
+   kubeadm upgrade node
+   ```
+
+### Upgrade kubelet
+
+1. From the Windows node, upgrade and restart the kubelet:
+
+   ```powershell
+   stop-service kubelet
+   curl.exe -Lo C:\k\kubelet.exe https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubelet.exe
+   restart-service kubelet
+   ```
+
+### Uncordon the node
+
+1. From a machine with access to the Kubernetes API,
+bring the node back online by marking it schedulable:
+
+   ```shell
+   # replace <node-to-drain> with the name of your node
+   kubectl uncordon <node-to-drain>
+   ```
+
+### Upgrade kube-proxy
+
+1.
From a machine with access to the Kubernetes API, run the following, +again replacing {{< param "fullversion" >}} with your desired version: + + ```shell + curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/kube-proxy.yml | sed 's/VERSION/{{< param "fullversion" >}}/g' | kubectl apply -f - + ``` + + +{{% /capture %}} diff --git a/content/en/docs/tasks/administer-cluster/nodelocaldns.md b/content/en/docs/tasks/administer-cluster/nodelocaldns.md index f23525e377433..6502ce14728b0 100644 --- a/content/en/docs/tasks/administer-cluster/nodelocaldns.md +++ b/content/en/docs/tasks/administer-cluster/nodelocaldns.md @@ -8,7 +8,7 @@ content_template: templates/task --- {{% capture overview %}} -{{< feature-state for_k8s_version="v1.15" state="beta" >}} +{{< feature-state for_k8s_version="v1.18" state="stable" >}} This page provides an overview of NodeLocal DNSCache feature in Kubernetes. {{% /capture %}} diff --git a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md index 39d0e825b8d89..e82b55583ba1d 100644 --- a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md +++ b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md @@ -5,6 +5,7 @@ reviewers: - dashpole title: Reserve Compute Resources for System Daemons content_template: templates/task +min-kubernetes-server-version: 1.8 --- {{% capture overview %}} @@ -27,6 +28,9 @@ on each node. {{% capture prerequisites %}} {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} +Your Kubernetes server must be at or later than version 1.17 to use +the kubelet command line option `--reserved-cpus` to set an +[explicitly reserved CPU list](#explicitly-reserved-cpu-list). {{% /capture %}} @@ -146,9 +150,9 @@ exist. Kubelet will fail if an invalid cgroup is specified. - **Kubelet Flag**: `--reserved-cpus=0-3` `reserved-cpus` is meant to define an explicit CPU set for OS system daemons and -kubernetes system daemons. This option is added in 1.17 release. `reserved-cpus` -is for systems that do not intent to define separate top level cgroups for -OS system daemons and kubernetes system daemons with regard to cpuset resource. +kubernetes system daemons. `reserved-cpus` is for systems that do not intend to +define separate top level cgroups for OS system daemons and kubernetes system daemons +with regard to cpuset resource. If the Kubelet **does not** have `--system-reserved-cgroup` and `--kube-reserved-cgroup`, the explicit cpuset provided by `reserved-cpus` will take precedence over the CPUs defined by `--kube-reserved` and `--system-reserved` options. @@ -247,36 +251,4 @@ If `kube-reserved` and/or `system-reserved` is not enforced and system daemons exceed their reservation, `kubelet` evicts pods whenever the overall node memory usage is higher than `31.5Gi` or `storage` is greater than `90Gi` -## Feature Availability - -As of Kubernetes version 1.2, it has been possible to **optionally** specify -`kube-reserved` and `system-reserved` reservations. The scheduler switched to -using `Allocatable` instead of `Capacity` when available in the same release. - -As of Kubernetes version 1.6, `eviction-thresholds` are being considered by -computing `Allocatable`. To revert to the old behavior set -`--experimental-allocatable-ignore-eviction` kubelet flag to `true`. - -As of Kubernetes version 1.6, `kubelet` enforces `Allocatable` on pods using -control groups. 
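As a rough sketch of how `--reserved-cpus` might be wired up on a node (the systemd drop-in path and the `KUBELET_EXTRA_ARGS` mechanism below are assumptions that hold for typical kubeadm installs; adjust for however your distribution launches the kubelet):

```shell
# Sketch: dedicate CPUs 0-3 to OS and Kubernetes system daemons on this node.
# Assumes a kubeadm-style setup where the kubelet unit reads KUBELET_EXTRA_ARGS.
cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/20-reserved-cpus.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--reserved-cpus=0-3"
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```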
To revert to the old behavior unset `--enforce-node-allocatable` -kubelet flag. Note that unless `--kube-reserved`, or `--system-reserved` or -`--eviction-hard` flags have non-default values, `Allocatable` enforcement does -not affect existing deployments. - -As of Kubernetes version 1.6, `kubelet` launches pods in their own cgroup -sandbox in a dedicated part of the cgroup hierarchy it manages. Operators are -required to drain their nodes prior to upgrade of the `kubelet` from prior -versions in order to ensure pods and their associated containers are launched in -the proper part of the cgroup hierarchy. - -As of Kubernetes version 1.7, `kubelet` supports specifying `storage` as a resource -for `kube-reserved` and `system-reserved`. - -As of Kubernetes version 1.8, the `storage` key name was changed to `ephemeral-storage` -for the alpha release. - -As of Kubernetes version 1.17, you can optionally specify -explicit cpuset by `reserved-cpus` as CPUs reserved for OS system -daemons/interrupts/timers and Kubernetes daemons. - {{% /capture %}} diff --git a/content/en/docs/tasks/administer-cluster/topology-manager.md b/content/en/docs/tasks/administer-cluster/topology-manager.md index 382038a3a54d7..2e37830e411b8 100644 --- a/content/en/docs/tasks/administer-cluster/topology-manager.md +++ b/content/en/docs/tasks/administer-cluster/topology-manager.md @@ -8,11 +8,12 @@ reviewers: - nolancon content_template: templates/task +min-kubernetes-server-version: v1.18 --- {{% capture overview %}} -{{< feature-state state="alpha" >}} +{{< feature-state state="beta" >}} An increasing number of systems leverage a combination of CPUs and hardware accelerators to support latency-critical execution and high-throughput parallel computation. These include workloads in fields such as telecommunications, scientific computing, machine learning, financial services and data analytics. Such hybrid systems comprise a high performance environment. @@ -44,6 +45,10 @@ The Topology manager receives Topology information from the *Hint Providers* as The selected hint is stored as part of the Topology Manager. Depending on the policy configured the pod can be accepted or rejected from the node based on the selected hint. The hint is then stored in the Topology Manager for use by the *Hint Providers* when making the resource allocation decisions. +### Enable the Topology Manager feature + +Support for the Topology Manager requires `TopologyManager` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled. It is enabled by default starting with Kubernetes 1.18. + ### Topology Manager Policies The Topology Manager currently: @@ -176,12 +181,10 @@ In the case of the `BestEffort` pod the CPU Manager would send back the default Using this information the Topology Manager calculates the optimal hint for the pod and stores this information, which will be used by the Hint Providers when they are making their resource assignments. ### Known Limitations -1. As of K8s 1.16 the Topology Manager is currently only guaranteed to work if a *single* container in the pod spec requires aligned resources. This is due to the hint generation being based on current resource allocations, and all containers in a pod generate hints before any resource allocation has been made. This results in unreliable hints for all but the first container in a pod. -*Due to this limitation if multiple pods/containers are considered by Kubelet in quick succession they may not respect the Topology Manager policy. - -2. 
The maximum number of NUMA nodes that Topology Manager will allow is 8, past this there will be a state explosion when trying to enumerate the possible NUMA affinities and generating their hints.
+1. The maximum number of NUMA nodes that Topology Manager allows is 8. With more than 8 NUMA nodes, there will be a state explosion when trying to enumerate the possible NUMA affinities and generating their hints.
 
-3. The scheduler is not topology-aware, so it is possible to be scheduled on a node and then fail on the node due to the Topology Manager.
+2. The scheduler is not topology-aware, so it is possible to be scheduled on a node and then fail on the node due to the Topology Manager.
+3. The Device Manager and the CPU Manager are the only components to adopt the Topology Manager's HintProvider interface. This means that NUMA alignment can only be achieved for resources managed by the CPU Manager and the Device Manager. Memory or Hugepages are not considered by the Topology Manager for NUMA alignment.
 
 {{% /capture %}}
diff --git a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md
new file mode 100644
index 0000000000000..ded131d6103dd
--- /dev/null
+++ b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md
@@ -0,0 +1,120 @@
+---
+title: Assign Pods to Nodes using Node Affinity
+min-kubernetes-server-version: v1.10
+content_template: templates/task
+weight: 120
+---
+
+{{% capture overview %}}
+This page shows how to assign a Kubernetes Pod to a particular node using Node Affinity in a
+Kubernetes cluster.
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## Add a label to a node
+
+1. List the nodes in your cluster, along with their labels:
+
+   ```shell
+   kubectl get nodes --show-labels
+   ```
+   The output is similar to this:
+
+   ```shell
+   NAME      STATUS    ROLES    AGE     VERSION        LABELS
+   worker0   Ready     <none>   1d      v1.13.0        ...,kubernetes.io/hostname=worker0
+   worker1   Ready     <none>   1d      v1.13.0        ...,kubernetes.io/hostname=worker1
+   worker2   Ready     <none>   1d      v1.13.0        ...,kubernetes.io/hostname=worker2
+   ```
+1. Choose one of your nodes, and add a label to it:
+
+   ```shell
+   kubectl label nodes <your-node-name> disktype=ssd
+   ```
+   where `<your-node-name>` is the name of your chosen node.
+
+1. Verify that your chosen node has a `disktype=ssd` label:
+
+   ```shell
+   kubectl get nodes --show-labels
+   ```
+
+   The output is similar to this:
+
+   ```
+   NAME      STATUS    ROLES    AGE     VERSION        LABELS
+   worker0   Ready     <none>   1d      v1.13.0        ...,disktype=ssd,kubernetes.io/hostname=worker0
+   worker1   Ready     <none>   1d      v1.13.0        ...,kubernetes.io/hostname=worker1
+   worker2   Ready     <none>   1d      v1.13.0        ...,kubernetes.io/hostname=worker2
+   ```
+
+   In the preceding output, you can see that the `worker0` node has a
+   `disktype=ssd` label.
+
+## Schedule a Pod using required node affinity
+
+This manifest describes a Pod that has a `requiredDuringSchedulingIgnoredDuringExecution` node affinity, `disktype: ssd`.
+This means that the pod will get scheduled only on a node that has a `disktype=ssd` label.
+
+{{< codenew file="pods/pod-nginx-required-affinity.yaml" >}}
+
+1. Apply the manifest to create a Pod that is scheduled onto your
+   chosen node:
+
+   ```shell
+   kubectl apply -f https://k8s.io/examples/pods/pod-nginx-required-affinity.yaml
+   ```
+
+1. Verify that the pod is running on your chosen node:
+
+   ```shell
+   kubectl get pods --output=wide
+   ```
+
+   The output is similar to this:
+
+   ```
+   NAME     READY     STATUS    RESTARTS   AGE    IP           NODE
+   nginx    1/1       Running   0          13s    10.200.0.4   worker0
+   ```
+
+## Schedule a Pod using preferred node affinity
+
+This manifest describes a Pod that has a `preferredDuringSchedulingIgnoredDuringExecution` node affinity, `disktype: ssd`.
+This means that the pod will prefer a node that has a `disktype=ssd` label.
+
+{{< codenew file="pods/pod-nginx-preferred-affinity.yaml" >}}
+
+1. Apply the manifest to create a Pod that is scheduled onto your
+   chosen node:
+
+   ```shell
+   kubectl apply -f https://k8s.io/examples/pods/pod-nginx-preferred-affinity.yaml
+   ```
+
+1. Verify that the pod is running on your chosen node:
+
+   ```shell
+   kubectl get pods --output=wide
+   ```
+
+   The output is similar to this:
+
+   ```
+   NAME     READY     STATUS    RESTARTS   AGE    IP           NODE
+   nginx    1/1       Running   0          13s    10.200.0.4   worker0
+   ```
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+Learn more about
+[Node Affinity](/docs/concepts/configuration/assign-pod-node/#node-affinity).
+{{% /capture %}}
diff --git a/content/en/docs/tasks/configure-pod-container/configure-gmsa.md b/content/en/docs/tasks/configure-pod-container/configure-gmsa.md
index ff47a7dd8c7ef..83d9dee596a02 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-gmsa.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-gmsa.md
@@ -6,7 +6,7 @@ weight: 20
 
 {{% capture overview %}}
 
-{{< feature-state for_k8s_version="v1.16" state="beta" >}}
+{{< feature-state for_k8s_version="v1.18" state="stable" >}}
 
 This page shows how to configure [Group Managed Service Accounts](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview) (GMSA) for Pods and containers that will run on Windows nodes. Group Managed Service Accounts are a specific type of Active Directory account that provides automatic password management, simplified service principal name (SPN) management, and the ability to delegate the management to other administrators across multiple servers.
 
@@ -18,9 +18,6 @@ In Kubernetes, GMSA credential specs are configured at a Kubernetes cluster-wide
 
 You need to have a Kubernetes cluster and the `kubectl` command-line tool must be configured to communicate with your cluster. The cluster is expected to have Windows worker nodes. This section covers a set of initial steps required once for each cluster:
 
-### WindowsGMSA feature gate
-The `WindowsGMSA` feature gate (required to pass down GMSA credential specs from the pod specs to the container runtime) is enabled by default on the API server and the kubelet. See [Feature Gates](/docs/reference/command-line-tools-reference/feature-gates/) for an explanation of enabling or disabling feature gates.
-
 ### Install the GMSACredentialSpec CRD
 A [CustomResourceDefinition](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/)(CRD) for GMSA credential spec resources needs to be configured on the cluster to define the custom resource type `GMSACredentialSpec`. Download the GMSA CRD [YAML](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/gmsa-crd.yml) and save it as gmsa-crd.yaml. Next, install the CRD with `kubectl apply -f gmsa-crd.yaml`
 
@@ -42,7 +39,7 @@ Installing the above webhooks and associated objects require the steps below:
 
 1. Create the validating and mutating webhook configurations referring to the deployment.
 
-A [script](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/deploy-gmsa-webhook.sh) can be used to deploy and configure the GMSA webhooks and associated objects mentioned above. The script can be run with a ```--dry-run``` option to allow you to review the changes that would be made to your cluster.
+A [script](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/deploy-gmsa-webhook.sh) can be used to deploy and configure the GMSA webhooks and associated objects mentioned above. The script can be run with a ```--dry-run=server``` option to allow you to review the changes that would be made to your cluster.
 
 The [YAML template](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/gmsa-webhook.yml.tpl) used by the script may also be used to deploy the webhooks and associated objects manually (with appropriate substitutions for the parameters)
 
diff --git a/content/en/docs/tasks/configure-pod-container/configure-runasusername.md b/content/en/docs/tasks/configure-pod-container/configure-runasusername.md
index 8fecb6535c1d8..530666e9dc12c 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-runasusername.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-runasusername.md
@@ -6,13 +6,9 @@ weight: 20
 
 {{% capture overview %}}
 
-{{< feature-state for_k8s_version="v1.17" state="beta" >}}
+{{< feature-state for_k8s_version="v1.18" state="stable" >}}
 
-This page shows how to enable and use the `RunAsUserName` feature for pods and containers that will run on Windows nodes. This feature is meant to be the Windows equivalent of the Linux-specific `runAsUser` feature, allowing users to run the container entrypoints with a different username that their default ones.
-
-{{< note >}}
-This feature is in beta. The overall functionality for `RunAsUserName` will not change, but there may be some changes regarding the username validation.
-{{< /note >}}
+This page shows how to use the `runAsUserName` setting for Pods and containers that will run on Windows nodes. This is roughly equivalent to the Linux-specific `runAsUser` setting, allowing you to run applications in a container as a different username than the default.
 
 {{% /capture %}}
 
@@ -60,7 +56,6 @@ The output should be:
 ContainerUser
 ```
 
-
 ## Set the Username for a Container
 
 To specify the username with which to execute a Container's processes, include the `securityContext` field ([SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core)) in the Container manifest, and within it, the `windowsOptions` ([WindowsSecurityContextOptions](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#windowssecuritycontextoptions-v1-core) field containing the `runAsUserName` field.
diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md
index f4917b36a91a8..a86ae91aca960 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md
@@ -299,9 +299,67 @@ token available to the pod at a configurable file path, and refresh the token as
 The application is responsible for reloading the token when it rotates. Periodic reloading (e.g. once every 5 minutes) is sufficient for most use cases.
 
+## Service Account Issuer Discovery
+
+{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
+
+The Service Account Issuer Discovery feature is enabled by enabling the
+`ServiceAccountIssuerDiscovery` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+and then enabling the Service Account Token Projection feature as described
+[above](#service-account-token-volume-projection).
+
+{{< note >}}
+The issuer URL must comply with the
+[OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html). In
+practice, this means it must use the `https` scheme, and should serve an OpenID
+provider configuration at `{service-account-issuer}/.well-known/openid-configuration`.
+
+If the URL does not comply, the `ServiceAccountIssuerDiscovery` endpoints will
+not be registered, even if the feature is enabled.
+{{< /note >}}
+
+The Service Account Issuer Discovery feature enables federation of Kubernetes
+service account tokens issued by a cluster (the _identity provider_) with
+external systems (_relying parties_).
+
+When enabled, the Kubernetes API server provides an OpenID Provider
+Configuration document at `/.well-known/openid-configuration` and the associated
+JSON Web Key Set (JWKS) at `/openid/v1/jwks`. The OpenID Provider Configuration
+is sometimes referred to as the _discovery document_.
+
+When enabled, the cluster is also configured with a default RBAC ClusterRole
+called `system:service-account-issuer-discovery`. No role bindings are provided
+by default. Administrators may, for example, choose whether to bind the role to
+`system:authenticated` or `system:unauthenticated` depending on their security
+requirements and which external systems they intend to federate with.
+
+{{< note >}}
+The responses served at `/.well-known/openid-configuration` and
+`/openid/v1/jwks` are designed to be OIDC compatible, but not strictly OIDC
+compliant. Those documents contain only the parameters necessary to perform
+validation of Kubernetes service account tokens.
+{{< /note >}}
+
+The JWKS response contains public keys that a relying party can use to validate
+the Kubernetes service account tokens. Relying parties first query for the
+OpenID Provider Configuration, and use the `jwks_uri` field in the response to
+find the JWKS.
+
+In many cases, Kubernetes API servers are not available on the public internet,
+but public endpoints that serve cached responses from the API server can be made
+available by users or service providers. In these cases, it is possible to
+override the `jwks_uri` in the OpenID Provider Configuration so that it points
+to the public endpoint, rather than the API server's address, by passing the
+`--service-account-jwks-uri` flag to the API server. Like the issuer URL, the
+JWKS URI is required to use the `https` scheme.
 
 {{% /capture %}}
 
 {{% capture whatsnext %}}
 
-See also the
-[Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/).
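As a rough sketch of the discovery flow described in the section above, assuming `$APISERVER` points at your cluster and `$TOKEN` holds a bearer token authorized via the `system:service-account-issuer-discovery` ClusterRole:

```shell
# Sketch: fetch the OpenID Provider Configuration (the discovery document),
# then the JWKS that its jwks_uri field points to.
# -k skips TLS verification for brevity; use --cacert with your cluster CA instead.
curl -sk -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/.well-known/openid-configuration"
curl -sk -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/openid/v1/jwks"
```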
+
+See also:
+
+- [Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/)
+- [Service Account Signing Key Retrieval KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/20190730-oidc-discovery.md)
+- [OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html)
+
 {{% /capture %}}
diff --git a/content/en/docs/tasks/configure-pod-container/security-context.md b/content/en/docs/tasks/configure-pod-container/security-context.md
index bc1fc3a8272bf..038fbcb97fb48 100644
--- a/content/en/docs/tasks/configure-pod-container/security-context.md
+++ b/content/en/docs/tasks/configure-pod-container/security-context.md
@@ -140,6 +140,45 @@ Exit your shell:
 exit
 ```
 
+## Configure volume permission and ownership change policy for Pods
+
+{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
+
+By default, Kubernetes recursively changes ownership and permissions for the contents of each
+volume to match the `fsGroup` specified in a Pod's `securityContext` when that volume is
+mounted.
+For large volumes, checking and changing ownership and permissions can take a lot of time,
+slowing Pod startup. You can use the `fsGroupChangePolicy` field inside a `securityContext`
+to control the way that Kubernetes checks and manages ownership and permissions
+for a volume.
+
+**fsGroupChangePolicy** - `fsGroupChangePolicy` defines behavior for changing ownership and permission of a volume
+before it is exposed inside a Pod. This field only applies to volume types that support
+`fsGroup` controlled ownership and permissions. This field has two possible values:
+
+* _OnRootMismatch_: Only change permissions and ownership if the permission and ownership of the root directory do not match the expected permissions of the volume. This could help shorten the time it takes to change ownership and permission of a volume.
+* _Always_: Always change permission and ownership of the volume when a volume is mounted.
+
+For example:
+
+```yaml
+securityContext:
+  runAsUser: 1000
+  runAsGroup: 3000
+  fsGroup: 2000
+  fsGroupChangePolicy: "OnRootMismatch"
+```
+
+This is an alpha feature. To use it, enable the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `ConfigurableFSGroupPolicy` for the kube-apiserver, the kube-controller-manager, and the kubelet.
+
+{{< note >}}
+This field has no effect on ephemeral volume types such as
+[`secret`](https://kubernetes.io/docs/concepts/storage/volumes/#secret),
+[`configMap`](https://kubernetes.io/docs/concepts/storage/volumes/#configmap),
+and [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir).
+{{< /note >}}
+
+
 ## Set the security context for a Container
 
 To specify security settings for a Container, include the `securityContext` field
diff --git a/content/en/docs/tasks/debug-application-cluster/debug-application.md b/content/en/docs/tasks/debug-application-cluster/debug-application.md
index 053af8b65456e..f63173e334841 100644
--- a/content/en/docs/tasks/debug-application-cluster/debug-application.md
+++ b/content/en/docs/tasks/debug-application-cluster/debug-application.md
@@ -64,38 +64,8 @@ Again, the information from `kubectl describe ...` should be informative.
The m #### My pod is crashing or otherwise unhealthy -First, take a look at the logs of -the current container: - -```shell -kubectl logs ${POD_NAME} ${CONTAINER_NAME} -``` - -If your container has previously crashed, you can access the previous container's crash log with: - -```shell -kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME} -``` - -Alternately, you can run commands inside that container with `exec`: - -```shell -kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN} -``` - -{{< note >}} -`-c ${CONTAINER_NAME}` is optional. You can omit it for Pods that only contain a single container. -{{< /note >}} - -As an example, to look at the logs from a running Cassandra pod, you might run - -```shell -kubectl exec cassandra -- cat /var/log/cassandra/system.log -``` - -If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host, -but this should generally not be necessary given tools in the Kubernetes API. Therefore, if you find yourself needing to ssh into a machine, please file a -feature request on GitHub describing your use case and why these tools are insufficient. +Once your pod has been scheduled, the methods described in [Debug Running Pods]( +/docs/tasks/debug-application-cluster/debug-running-pods/) are available for debugging. #### My pod is running but not doing what I told it to do diff --git a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md index 56ba566bc6d22..ec84b82fd02fb 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md @@ -93,40 +93,9 @@ worker node, but it can't run on that machine. Again, the information from ### My pod is crashing or otherwise unhealthy -First, take a look at the logs of the current container: +Once your pod has been scheduled, the methods described in [Debug Running Pods]( +/docs/tasks/debug-application-cluster/debug-running-pods/) are available for debugging. -```shell -kubectl logs ${POD_NAME} ${CONTAINER_NAME} -``` - -If your container has previously crashed, you can access the previous -container's crash log with: - -```shell -kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME} -``` - -Alternately, you can run commands inside that container with `exec`: - -```shell -kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN} -``` - -{{< note >}} -`-c ${CONTAINER_NAME}` is optional. You can omit it for pods that -only contain a single container. -{{< /note >}} - -As an example, to look at the logs from a running Cassandra pod, you might run: - -```shell -kubectl exec cassandra -- cat /var/log/cassandra/system.log -``` - -If your cluster enabled it, you can also try adding an [ephemeral container](/docs/concepts/workloads/pods/ephemeral-containers/) into the existing pod. You can use the new temporary container to run arbitrary commands, for example, to diagnose problems inside the Pod. See the page about [ephemeral container](/docs/concepts/workloads/pods/ephemeral-containers/) for more details, including feature availability. - -If none of these approaches work, you can find the host machine that the pod is -running on and SSH into that host. 
## Debugging ReplicationControllers diff --git a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md new file mode 100644 index 0000000000000..95065ca595c6d --- /dev/null +++ b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md @@ -0,0 +1,190 @@ +--- +reviewers: +- verb +- soltysh +title: Debug Running Pods +content_template: templates/task +--- + +{{% capture overview %}} + +This page explains how to debug Pods running (or crashing) on a Node. + +{{% /capture %}} + +{{% capture prerequisites %}} + +* Your {{< glossary_tooltip text="Pod" term_id="pod" >}} should already be + scheduled and running. If your Pod is not yet running, start with [Troubleshoot + Applications](/docs/tasks/debug-application-cluster/debug-application/). +* For some of the advanced debugging steps you need to know on which Node the + Pod is running and have shell access to run commands on that Node. You don't + need that access to run the standard debug steps that use `kubectl`. + +{{% /capture %}} + +{{% capture steps %}} + +## Examining pod logs {#examine-pod-logs} + +First, look at the logs of the affected container: + +```shell +kubectl logs ${POD_NAME} ${CONTAINER_NAME} +``` + +If your container has previously crashed, you can access the previous container's crash log with: + +```shell +kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME} +``` + +## Debugging with container exec {#container-exec} + +If the {{< glossary_tooltip text="container image" term_id="image" >}} includes +debugging utilities, as is the case with images built from Linux and Windows OS +base images, you can run commands inside a specific container with +`kubectl exec`: + +```shell +kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN} +``` + +{{< note >}} +`-c ${CONTAINER_NAME}` is optional. You can omit it for Pods that only contain a single container. +{{< /note >}} + +As an example, to look at the logs from a running Cassandra pod, you might run + +```shell +kubectl exec cassandra -- cat /var/log/cassandra/system.log +``` + +You can run a shell that's connected to your terminal using the `-i` and `-t` +arguments to `kubectl exec`, for example: + +```shell +kubectl exec -it cassandra -- sh +``` + +For more details, see [Get a Shell to a Running Container]( +/docs/tasks/debug-application-cluster/get-shell-running-container/). + +## Debugging with an ephemeral debug container {#ephemeral-container} + +{{< feature-state state="alpha" for_k8s_version="v1.18" >}} + +{{< glossary_tooltip text="Ephemeral containers" term_id="ephemeral-container" >}} +are useful for interactive troubleshooting when `kubectl exec` is insufficient +because a container has crashed or a container image doesn't include debugging +utilities, such as with [distroless images]( +https://github.com/GoogleContainerTools/distroless). `kubectl` has an alpha +command that can create ephemeral containers for debugging beginning with version +`v1.18`. + +### Example debugging using ephemeral containers {#ephemeral-container-example} + +{{< note >}} +The examples in this section require the `EphemeralContainers` [feature gate]( +/docs/reference/command-line-tools-reference/feature-gates/) enabled in your +cluster and `kubectl` version v1.18 or later. +{{< /note >}} + +You can use the `kubectl alpha debug` command to add ephemeral containers to a +running Pod. 
First, create a pod for the example:
+
+```shell
+kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
+```
+
+{{< note >}}
+This section uses the `pause` container image in examples because it does not
+contain userland debugging utilities, but this method works with all container
+images.
+{{< /note >}}
+
+If you attempt to use `kubectl exec` to create a shell, you will see an error
+because there is no shell in this container image.
+
+```shell
+kubectl exec -it ephemeral-demo -- sh
+```
+
+```
+OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknown
+```
+
+You can instead add a debugging container using `kubectl alpha debug`. If you
+specify the `-i`/`--interactive` argument, `kubectl` will automatically attach
+to the console of the Ephemeral Container.
+
+```shell
+kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo
+```
+
+```
+Defaulting debug container name to debugger-8xzrl.
+If you don't see a command prompt, try pressing enter.
+/ #
+```
+
+This command adds a new busybox container and attaches to it. The `--target`
+parameter targets the process namespace of another container. It's necessary
+here because `kubectl run` does not enable [process namespace sharing](
+/docs/tasks/configure-pod-container/share-process-namespace/) in the pod it
+creates.
+
+{{< note >}}
+The `--target` parameter must be supported by the {{< glossary_tooltip
+text="Container Runtime" term_id="container-runtime" >}}. When not supported,
+the Ephemeral Container may not be started, or it may be started with an
+isolated process namespace.
+{{< /note >}}
+
+You can view the state of the newly created ephemeral container using `kubectl describe`:
+
+```shell
+kubectl describe pod ephemeral-demo
+```
+
+```
+...
+Ephemeral Containers:
+  debugger-8xzrl:
+    Container ID:   docker://b888f9adfd15bd5739fefaa39e1df4dd3c617b9902082b1cfdc29c4028ffb2eb
+    Image:          busybox
+    Image ID:       docker-pullable://busybox@sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084
+    Port:           <none>
+    Host Port:      <none>
+    State:          Running
+      Started:      Wed, 12 Feb 2020 14:25:42 +0100
+    Ready:          False
+    Restart Count:  0
+    Environment:    <none>
+    Mounts:         <none>
+...
+```
+
+Use `kubectl delete` to remove the Pod when you're finished:
+
+```shell
+kubectl delete pod ephemeral-demo
+```
+
+
+
+## Debugging via a shell on the node {#node-shell-session}
+
+If none of these approaches work, you can find the host machine that the pod is
+running on and SSH into that host, but this should generally not be necessary
+given tools in the Kubernetes API. Therefore, if you find yourself needing to
+ssh into a machine, please file a feature request on GitHub describing your use
+case and why these tools are insufficient.
+
+{{% /capture %}}
diff --git a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
index 45ad12506f070..072070ab663cf 100644
--- a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
+++ b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
@@ -56,7 +56,7 @@ as a Deployment object. If you use a different Kubernetes setup mechanism you ca
 
 Metric server collects metrics from the Summary API, exposed by
 [Kubelet](/docs/admin/kubelet/) on each node.
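To see this pipeline working end to end, a quick sketch, assuming metrics-server is already deployed in the cluster and the aggregation layer is configured:

```shell
# Sketch: confirm that the Resource Metrics API served by metrics-server responds.
kubectl top nodes                                     # summarized per-node usage
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes  # raw Metrics API response
```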
-Metrics Server registered in the main API server through
+Metrics Server is registered with the main API server through
 [Kubernetes aggregator](/docs/concepts/api-extension/apiserver-aggregation/).
 
 Learn more about the metrics server in
 [the design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md).
diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md
index ab98f2ea4e55b..8eec3c17818cc 100644
--- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md
+++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md
@@ -57,7 +57,7 @@ If you haven't created the DaemonSet in the system, check your DaemonSet
 manifest with the following command instead:
 
 ```shell
-kubectl apply -f ds.yaml --dry-run -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
+kubectl apply -f ds.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
 ```
 
 The output from both commands should be:
diff --git a/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md b/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md
index 890170a98869a..ad6b969c87ba5 100644
--- a/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md
+++ b/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md
@@ -17,11 +17,11 @@ can consume huge pages and the current limitations.
 {{% capture prerequisites %}}
 
 1. Kubernetes nodes must pre-allocate huge pages in order for the node to report
-   its huge page capacity. A node may only pre-allocate huge pages for a single
-   size.
+   its huge page capacity. A node can pre-allocate huge pages for multiple
+   sizes.
 
-The nodes will automatically discover and report all huge page resources as a
-schedulable resource.
+The nodes will automatically discover and report all huge page resources as
+schedulable resources.
 
 {{% /capture %}}
 
@@ -30,12 +30,51 @@ schedulable resource.
 ## API
 
 Huge pages can be consumed via container level resource requirements using the
-resource name `hugepages-<size>`, where size is the most compact binary notation
-using integer values supported on a particular node. For example, if a node
-supports 2048KiB page sizes, it will expose a schedulable resource
-`hugepages-2Mi`. Unlike CPU or memory, huge pages do not support overcommit. Note
-that when requesting hugepage resources, either memory or CPU resources must
-be requested as well.
+resource name `hugepages-<size>`, where `<size>` is the most compact binary
+notation using integer values supported on a particular node. For example, if a
+node supports 2048KiB and 1048576KiB page sizes, it will expose the schedulable
+resources `hugepages-2Mi` and `hugepages-1Gi`. Unlike CPU or memory, huge pages
+do not support overcommit. Note that when requesting hugepage resources, either
+memory or CPU resources must be requested as well.
+
+A pod may consume multiple huge page sizes in a single pod spec. In this case it
+must use `medium: HugePages-<hugepagesize>` notation for all volume mounts.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: huge-pages-example
+spec:
+  containers:
+  - name: example
+    image: fedora:latest
+    command:
+    - sleep
+    - inf
+    volumeMounts:
+    - mountPath: /hugepages-2Mi
+      name: hugepage-2mi
+    - mountPath: /hugepages-1Gi
+      name: hugepage-1gi
+    resources:
+      limits:
+        hugepages-2Mi: 100Mi
+        hugepages-1Gi: 2Gi
+        memory: 100Mi
+      requests:
+        memory: 100Mi
+  volumes:
+  - name: hugepage-2mi
+    emptyDir:
+      medium: HugePages-2Mi
+  - name: hugepage-1gi
+    emptyDir:
+      medium: HugePages-1Gi
+```
+
+A pod may use `medium: HugePages` only if it requests huge pages of one size.
 
 ```yaml
 apiVersion: v1
 kind: Pod
 metadata:
@@ -66,8 +105,7 @@ spec:
 
 - Huge page requests must equal the limits. This is the default if limits are
   specified, but requests are not.
-- Huge pages are isolated at a pod scope, container isolation is planned in a
-  future iteration.
+- Huge pages are isolated at container scope, so each container has its own
+  limit on its cgroup sandbox, as requested in the container spec.
 - EmptyDir volumes backed by huge pages may not consume more huge page memory
   than the pod request.
 - Applications that consume huge pages via `shmget()` with `SHM_HUGETLB` must
@@ -75,10 +113,15 @@ spec:
 - Huge page usage in a namespace is controllable via ResourceQuota similar
   to other compute resources like `cpu` or `memory` using the
   `hugepages-<size>` token.
+- Support for multiple huge page sizes is feature gated. It can be
+  enabled with the `HugePageStorageMediumSize` [feature
+gate](/docs/reference/command-line-tools-reference/feature-gates/) on the {{<
+glossary_tooltip text="kubelet" term_id="kubelet" >}} and {{<
+glossary_tooltip text="kube-apiserver" term_id="kube-apiserver" >}}
+(`--feature-gates=HugePageStorageMediumSize=true`).
 
 ## Future
 
-- Support container isolation of huge pages in addition to pod isolation.
 - NUMA locality guarantees as a feature of quality of service.
 - LimitRange support.
 
diff --git a/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md b/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md
index 835bf75faf5a4..6b1357a133bb2 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/imperative-command.md
@@ -139,10 +139,10 @@ creation. This is done by piping the output of the `create` command to the
 `set` command, and then back to the `create` command. Here's an example:
 
 ```sh
-kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -
+kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -
 ```
 
-1. The `kubectl create service -o yaml --dry-run` command creates the configuration for the Service, but prints it to stdout as YAML instead of sending it to the Kubernetes API server.
+1. The `kubectl create service -o yaml --dry-run=client` command creates the configuration for the Service, but prints it to stdout as YAML instead of sending it to the Kubernetes API server.
 1. The `kubectl set selector --local -f - -o yaml` command reads the configuration from stdin, and writes the updated configuration to stdout as YAML.
 1. The `kubectl create -f -` command creates the object using the configuration provided via stdin.
 
@@ -152,7 +152,7 @@
 You can use `kubectl create --edit` to make arbitrary changes to an object
 before it is created.
Here's an example: ```sh -kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run > /tmp/srv.yaml +kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run=client > /tmp/srv.yaml kubectl create --edit -f /tmp/srv.yaml ``` diff --git a/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md b/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md index f9a6ed4b18fd5..61230513a4c78 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/kustomization.md @@ -791,6 +791,12 @@ kubectl get -k ./ kubectl describe -k ./ ``` +Run the following command to compare the Deployment object `dev-my-nginx` against the state that the cluster would be in if the manifest was applied: + +```shell +kubectl diff -k ./ +``` + Run the following command to delete the Deployment object `dev-my-nginx`: ```shell diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index 3ed6fa0e41835..b5f7612d75439 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -199,13 +199,12 @@ The detailed documentation of `kubectl autoscale` can be found [here](/docs/refe ## Autoscaling during rolling update -Currently in Kubernetes, it is possible to perform a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/) by managing replication controllers directly, -or by using the deployment object, which manages the underlying replica sets for you. +Currently in Kubernetes, it is possible to perform a rolling update by using the deployment object, which manages the underlying replica sets for you. Horizontal Pod Autoscaler only supports the latter approach: the Horizontal Pod Autoscaler is bound to the deployment object, it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replica sets. Horizontal Pod Autoscaler does not work with rolling update using direct manipulation of replication controllers, -i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update (e.g. using `kubectl rolling-update`). +i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update. The reason this doesn't work is that when rolling update creates a new replication controller, the Horizontal Pod Autoscaler will not be bound to the new replication controller. @@ -284,6 +283,154 @@ and [external.metrics.k8s.io](https://github.com/kubernetes/community/blob/maste For examples of how to use them see [the walkthrough for using custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics) and [the walkthrough for using external metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects). +## Support for configurable scaling behavior + +Starting from +[v1.18](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md) +the `v2beta2` API allows scaling behavior to be configured through the HPA +`behavior` field. Behaviors are specified separately for scaling up and down in +`scaleUp` or `scaleDown` section under the `behavior` field. 
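+
+To show where `behavior` sits in a complete object, here is a minimal sketch of
+an HPA manifest using the `v2beta2` API (the target Deployment name, metric,
+and threshold values below are illustrative, not taken from this page):
+
+```yaml
+apiVersion: autoscaling/v2beta2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: example-hpa               # illustrative name
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: example-deployment      # assumed target workload
+  minReplicas: 1
+  maxReplicas: 10
+  metrics:
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 50
+  behavior:                       # scaling behavior, described below
+    scaleDown:
+      stabilizationWindowSeconds: 300
+      policies:
+      - type: Percent
+        value: 10
+        periodSeconds: 60
+```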
A stabilization
+window can be specified for both directions, which prevents flapping of the
+number of replicas in the scaling target. Similarly, specifying scaling
+policies controls the rate of change of replicas while scaling.
+
+### Scaling Policies
+
+One or more scaling policies can be specified in the `behavior` section of the spec.
+When multiple policies are specified, the policy which allows the highest amount of
+change is selected by default. The following example shows this behavior
+while scaling down:
+
+```yaml
+behavior:
+  scaleDown:
+    policies:
+    - type: Pods
+      value: 4
+      periodSeconds: 60
+    - type: Percent
+      value: 10
+      periodSeconds: 60
+```
+
+When the number of pods is more than 40 the second policy will be used for scaling down.
+For instance if there are 80 replicas and the target has to be scaled down to 10 replicas
+then during the first step 8 replicas will be reduced. In the next iteration when the number
+of replicas is 72, 10% of the pods is 7.2 but the number is rounded up to 8. On each loop of
+the autoscaler controller the number of pods to be changed is re-calculated based on the number
+of current replicas. When the number of replicas falls below 40 the first policy _(Pods)_ is applied
+and 4 replicas will be reduced at a time.
+
+`periodSeconds` indicates the length of time in the past for which the policy must hold true.
+The first policy allows at most 4 replicas to be scaled down in one minute. The second policy
+allows at most 10% of the current replicas to be scaled down in one minute.
+
+The policy selection can be changed by specifying the `selectPolicy` field for a scaling
+direction. Setting the value to `Min` selects the policy which allows the
+smallest change in the replica count. Setting the value to `Disabled` completely disables
+scaling in that direction.
+
+### Stabilization Window
+
+The stabilization window is used to restrict the flapping of replicas when the metrics
+used for scaling keep fluctuating. The stabilization window is used by the autoscaling
+algorithm to consider the computed desired state from the past to prevent scaling. In
+the following example the stabilization window is specified for `scaleDown`.
+
+```yaml
+scaleDown:
+  stabilizationWindowSeconds: 300
+```
+
+When the metrics indicate that the target should be scaled down, the algorithm looks
+into previously computed desired states and uses the highest value from the specified
+interval. In the above example, all desired states from the past 5 minutes will be considered.
+
+### Default Behavior
+
+To use custom scaling, not all fields have to be specified. Only values which need to be
+customized can be specified. These custom values are merged with default values. The default values
+match the existing behavior in the HPA algorithm.
+
+```yaml
+behavior:
+  scaleDown:
+    stabilizationWindowSeconds: 300
+    policies:
+    - type: Percent
+      value: 100
+      periodSeconds: 15
+  scaleUp:
+    stabilizationWindowSeconds: 0
+    policies:
+    - type: Percent
+      value: 100
+      periodSeconds: 15
+    - type: Pods
+      value: 4
+      periodSeconds: 15
+    selectPolicy: Max
+```
+For scaling down the stabilization window is _300_ seconds (or the value of the
+`--horizontal-pod-autoscaler-downscale-stabilization` flag if provided). There is only a single policy
+for scaling down, which allows 100% of the currently running replicas to be removed, which
+means the scaling target can be scaled down to the minimum allowed replicas.
+For scaling up there is no stabilization window. When the metrics indicate that the target should be
+scaled up, the target is scaled up immediately. There are 2 policies: 4 pods or 100% of the currently
+running replicas will be added every 15 seconds until the HPA reaches its steady state.
+
+### Example: change downscale stabilization window
+
+To provide a custom downscale stabilization window of 1 minute, the following
+behavior would be added to the HPA:
+
+```yaml
+behavior:
+  scaleDown:
+    stabilizationWindowSeconds: 60
+```
+
+### Example: limit scale down rate
+
+To limit the rate at which pods are removed by the HPA to 10% per minute, the
+following behavior would be added to the HPA:
+
+```yaml
+behavior:
+  scaleDown:
+    policies:
+    - type: Percent
+      value: 10
+      periodSeconds: 60
+```
+
+To additionally allow a final drop of 5 pods once 10% of the replicas would be
+fewer than 5 pods, another policy can be added and a selection strategy of
+`Max`, which picks the policy permitting the larger change:
+
+```yaml
+behavior:
+  scaleDown:
+    policies:
+    - type: Percent
+      value: 10
+      periodSeconds: 60
+    - type: Pods
+      value: 5
+      periodSeconds: 60
+    selectPolicy: Max
+```
+
+### Example: disable scale down
+
+The `selectPolicy` value of `Disabled` turns off scaling in the given direction.
+So to prevent downscaling the following policy would be used:
+
+```yaml
+behavior:
+  scaleDown:
+    selectPolicy: Disabled
+```
+
 {{% /capture %}}
 
 {{% capture whatsnext %}}
diff --git a/content/en/docs/tasks/run-application/rolling-update-replication-controller.md b/content/en/docs/tasks/run-application/rolling-update-replication-controller.md
deleted file mode 100644
index 11df107b241d2..0000000000000
--- a/content/en/docs/tasks/run-application/rolling-update-replication-controller.md
+++ /dev/null
@@ -1,251 +0,0 @@
----
-reviewers:
-- janetkuo
-title: Perform Rolling Update Using a Replication Controller
-content_template: templates/concept
-weight: 80
----
-
-{{% capture overview %}}
-
-{{< note >}}
-The preferred way to create a replicated application is to use a
-[Deployment](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#deployment-v1-apps),
-which in turn uses a
-[ReplicaSet](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#replicaset-v1-apps).
-For more information, see
-[Running a Stateless Application Using a Deployment](/docs/tasks/run-application/run-stateless-application-deployment/).
-{{< /note >}}
-
-To update a service without an outage, `kubectl` supports what is called [rolling update](/docs/reference/generated/kubectl/kubectl-commands/#rolling-update), which updates one pod at a time, rather than taking down the entire service at the same time. See the [rolling update design document](https://git.k8s.io/community/contributors/design-proposals/cli/simple-rolling-update.md) for more information.
-
-Note that `kubectl rolling-update` only supports Replication Controllers. However, if you deploy applications with Replication Controllers,
-consider switching them to [Deployments](/docs/concepts/workloads/controllers/deployment/). A Deployment is a higher-level controller that automates rolling updates
-of applications declaratively, and therefore is recommended. If you still want to keep your Replication Controllers and use `kubectl rolling-update`, keep reading:
-
-A rolling update applies changes to the configuration of pods being managed by
-a replication controller. The changes can be passed as a new replication
-controller configuration file; or, if only updating the image, a new container
-image can be specified directly.
-
-A rolling update works by:
-
-1.
Creating a new replication controller with the updated configuration. -2. Increasing/decreasing the replica count on the new and old controllers until - the correct number of replicas is reached. -3. Deleting the original replication controller. - -Rolling updates are initiated with the `kubectl rolling-update` command: - -```shell -kubectl rolling-update NAME NEW_NAME --image=IMAGE:TAG - -# or read the configuration from a file -kubectl rolling-update NAME -f FILE -``` - -{{% /capture %}} - - -{{% capture body %}} - -## Passing a configuration file - -To initiate a rolling update using a configuration file, pass the new file to -`kubectl rolling-update`: - -```shell -kubectl rolling-update NAME -f FILE -``` - -The configuration file must: - -* Specify a different `metadata.name` value. - -* Overwrite at least one common label in its `spec.selector` field. - -* Use the same `metadata.namespace`. - -Replication controller configuration files are described in -[Creating Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/). - -### Examples - -```shell -# Update pods of frontend-v1 using new replication controller data in frontend-v2.json. -kubectl rolling-update frontend-v1 -f frontend-v2.json - -# Update pods of frontend-v1 using JSON data passed into stdin. -cat frontend-v2.json | kubectl rolling-update frontend-v1 -f - -``` - -## Updating the container image - -To update only the container image, pass a new image name and tag with the -`--image` flag and (optionally) a new controller name: - -```shell -kubectl rolling-update NAME NEW_NAME --image=IMAGE:TAG -``` - -The `--image` flag is only supported for single-container pods. Specifying -`--image` with multi-container pods returns an error. - -If you didn't specify a new name, this creates a new replication controller -with a temporary name. Once the rollout is complete, the old controller is -deleted, and the new controller is updated to use the original name. - -The update will fail if `IMAGE:TAG` is identical to the -current value. For this reason, we recommend the use of versioned tags as -opposed to values such as `:latest`. Doing a rolling update from `image:latest` -to a new `image:latest` will fail, even if the image at that tag has changed. -Moreover, the use of `:latest` is not recommended, see -[Best Practices for Configuration](/docs/concepts/configuration/overview/#container-images) for more information. - -### Examples - -```shell -# Update the pods of frontend-v1 to frontend-v2 -kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2 - -# Update the pods of frontend, keeping the replication controller name -kubectl rolling-update frontend --image=image:v2 -``` - -## Required and optional fields - -Required fields are: - -* `NAME`: The name of the replication controller to update. - -as well as either: - -* `-f FILE`: A replication controller configuration file, in either JSON or - YAML format. The configuration file must specify a new top-level `id` value - and include at least one of the existing `spec.selector` key:value pairs. - See the - [Run Stateless AP Replication Controller](/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/#replication-controller-configuration-file) - page for details. -
-
- or: -
-
-* `--image IMAGE:TAG`: The name and tag of the image to update to. Must be - different than the current image:tag currently specified. - -Optional fields are: - -* `NEW_NAME`: Only used in conjunction with `--image` (not with `-f FILE`). The - name to assign to the new replication controller. -* `--poll-interval DURATION`: The time between polling the controller status - after update. Valid units are `ns` (nanoseconds), `us` or `µs` (microseconds), - `ms` (milliseconds), `s` (seconds), `m` (minutes), or `h` (hours). Units can - be combined (e.g. `1m30s`). The default is `3s`. -* `--timeout DURATION`: The maximum time to wait for the controller to update a - pod before exiting. Default is `5m0s`. Valid units are as described for - `--poll-interval` above. -* `--update-period DURATION`: The time to wait between updating pods. Default - is `1m0s`. Valid units are as described for `--poll-interval` above. - -Additional information about the `kubectl rolling-update` command is available -from the [`kubectl` reference](/docs/reference/generated/kubectl/kubectl-commands/#rolling-update). - -## Walkthrough - -Let's say you were running version 1.14.2 of nginx: - -{{< codenew file="controllers/replication-nginx-1.14.2.yaml" >}} - -To update to version 1.16.1, you can use [`kubectl rolling-update --image`](https://git.k8s.io/community/contributors/design-proposals/cli/simple-rolling-update.md) to specify the new image: - -```shell -kubectl rolling-update my-nginx --image=nginx:1.16.1 -``` -``` -Created my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 -``` - -In another window, you can see that `kubectl` added a `deployment` label to the pods, whose value is a hash of the configuration, to distinguish the new pods from the old: - -```shell -kubectl get pods -l app=nginx -L deployment -``` -``` -NAME READY STATUS RESTARTS AGE DEPLOYMENT -my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-k156z 1/1 Running 0 1m ccba8fbd8cc8160970f63f9a2696fc46 -my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-v95yh 1/1 Running 0 35s ccba8fbd8cc8160970f63f9a2696fc46 -my-nginx-divi2 1/1 Running 0 2h 2d1d7a8f682934a254002b56404b813e -my-nginx-o0ef1 1/1 Running 0 2h 2d1d7a8f682934a254002b56404b813e -my-nginx-q6all 1/1 Running 0 8m 2d1d7a8f682934a254002b56404b813e -``` - -`kubectl rolling-update` reports progress as it progresses: - -``` -Scaling up my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 from 0 to 3, scaling down my-nginx from 3 to 0 (keep 3 pods available, don't exceed 4 pods) -Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 1 -Scaling my-nginx down to 2 -Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 2 -Scaling my-nginx down to 1 -Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 3 -Scaling my-nginx down to 0 -Update succeeded. Deleting old controller: my-nginx -Renaming my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 to my-nginx -replicationcontroller "my-nginx" rolling updated -``` - -If you encounter a problem, you can stop the rolling update midway and revert to the previous version using `--rollback`: - -```shell -kubectl rolling-update my-nginx --rollback -``` -``` -Setting "my-nginx" replicas to 1 -Continuing update with existing controller my-nginx. -Scaling up nginx from 1 to 1, scaling down my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 from 1 to 0 (keep 1 pods available, don't exceed 2 pods) -Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 down to 0 -Update succeeded. 
Deleting my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 -replicationcontroller "my-nginx" rolling updated -``` - -This is one example where the immutability of containers is a huge asset. - -If you need to update more than just the image (e.g., command arguments, environment variables), you can create a new replication controller, with a new name and distinguishing label value, such as: - -{{< codenew file="controllers/replication-nginx-1.16.1.yaml" >}} - -and roll it out: - -```shell -# Assuming you named the file "my-nginx.yaml" -kubectl rolling-update my-nginx -f ./my-nginx.yaml -``` -``` -Created my-nginx-v4 -Scaling up my-nginx-v4 from 0 to 5, scaling down my-nginx from 4 to 0 (keep 4 pods available, don't exceed 5 pods) -Scaling my-nginx-v4 up to 1 -Scaling my-nginx down to 3 -Scaling my-nginx-v4 up to 2 -Scaling my-nginx down to 2 -Scaling my-nginx-v4 up to 3 -Scaling my-nginx down to 1 -Scaling my-nginx-v4 up to 4 -Scaling my-nginx down to 0 -Scaling my-nginx-v4 up to 5 -Update succeeded. Deleting old controller: my-nginx -replicationcontroller "my-nginx-v4" rolling updated -``` - -## Troubleshooting - -If the `timeout` duration is reached during a rolling update, the operation will -fail with some pods belonging to the new replication controller, and some to the -original controller. - -To continue the update from where it failed, retry using the same command. - -To roll back to the original state before the attempted update, append the -`--rollback=true` flag to the original command. This will revert all changes. - -{{% /capture %}} diff --git a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md index 2fbed683c5022..73268ff71417f 100644 --- a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md +++ b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md @@ -20,7 +20,7 @@ Use [Helm](https://helm.sh/) to install Service Catalog on your Kubernetes clust * If you are using a cloud-based Kubernetes cluster or {{< glossary_tooltip text="Minikube" term_id="minikube" >}}, you may already have cluster DNS enabled. * If you are using `hack/local-up-cluster.sh`, ensure that the `KUBE_ENABLE_CLUSTER_DNS` environment variable is set, then run the install script. * [Install and setup kubectl](/docs/tasks/tools/install-kubectl/) v1.7 or higher. Make sure it is configured to connect to the Kubernetes cluster. -* Install [Helm](http://helm.sh/) v2.7.0 or newer. +* Install [Helm](https://helm.sh/) v2.7.0 or newer. * Follow the [Helm install instructions](https://helm.sh/docs/intro/install/). * If you already have an appropriate version of Helm installed, execute `helm init` to install Tiller, the server-side component of Helm. diff --git a/content/en/docs/tasks/tools/install-minikube.md b/content/en/docs/tasks/tools/install-minikube.md index b106c23ef6081..50e4436dec88a 100644 --- a/content/en/docs/tasks/tools/install-minikube.md +++ b/content/en/docs/tasks/tools/install-minikube.md @@ -26,7 +26,7 @@ grep -E --color 'vmx|svm' /proc/cpuinfo {{% tab name="macOS" %}} To check if virtualization is supported on macOS, run the following command on your terminal. ``` -sysctl -a | grep -E --color 'machdep.cpu.features|VMX' +sysctl -a | grep -E --color 'machdep.cpu.features|VMX' ``` If you see `VMX` in the output (should be colored), the VT-x feature is enabled in your machine. 
{{% /tab %}} @@ -74,7 +74,7 @@ If you do not already have a hypervisor installed, install one of these now: • [VirtualBox](https://www.virtualbox.org/wiki/Downloads) -Minikube also supports a `--vm-driver=none` option that runs the Kubernetes components on the host and not in a VM. +Minikube also supports a `--driver=none` option that runs the Kubernetes components on the host and not in a VM. Using this driver requires [Docker](https://www.docker.com/products/docker-desktop) and a Linux environment but not a hypervisor. If you're using the `none` driver in Debian or a derivative, use the `.deb` packages for @@ -83,7 +83,7 @@ You can download `.deb` packages from [Docker](https://www.docker.com/products/d {{< caution >}} The `none` VM driver can result in security and data loss issues. -Before using `--vm-driver=none`, consult [this documentation](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) for more information. +Before using `--driver=none`, consult [this documentation](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) for more information. {{< /caution >}} Minikube also supports a `vm-driver=podman` similar to the Docker driver. Podman run as superuser privilege (root user) is the best way to ensure that your containers have full access to any feature available on your system. @@ -214,12 +214,12 @@ To confirm successful installation of both a hypervisor and Minikube, you can ru {{< note >}} -For setting the `--vm-driver` with `minikube start`, enter the name of the hypervisor you installed in lowercase letters where `` is mentioned below. A full list of `--vm-driver` values is available in [specifying the VM driver documentation](https://kubernetes.io/docs/setup/learning-environment/minikube/#specifying-the-vm-driver). +For setting the `--driver` with `minikube start`, enter the name of the hypervisor you installed in lowercase letters where `` is mentioned below. A full list of `--driver` values is available in [specifying the VM driver documentation](https://kubernetes.io/docs/setup/learning-environment/minikube/#specifying-the-vm-driver). 
{{< /note >}} ```shell -minikube start --vm-driver= +minikube start --driver= ``` Once `minikube start` finishes, run the command below to check the status of the cluster: diff --git a/content/en/examples/admin/resource/quota-mem-cpu-pod-2.yaml b/content/en/examples/admin/resource/quota-mem-cpu-pod-2.yaml index 22726c600aaf3..380e900fda52f 100644 --- a/content/en/examples/admin/resource/quota-mem-cpu-pod-2.yaml +++ b/content/en/examples/admin/resource/quota-mem-cpu-pod-2.yaml @@ -9,8 +9,7 @@ spec: resources: limits: memory: "1Gi" - cpu: "800m" + cpu: "800m" requests: memory: "700Mi" cpu: "400m" - diff --git a/content/en/examples/admin/resource/quota-mem-cpu-pod.yaml b/content/en/examples/admin/resource/quota-mem-cpu-pod.yaml index ba27bf5ccfc78..b0fd0a9451bf2 100644 --- a/content/en/examples/admin/resource/quota-mem-cpu-pod.yaml +++ b/content/en/examples/admin/resource/quota-mem-cpu-pod.yaml @@ -9,8 +9,7 @@ spec: resources: limits: memory: "800Mi" - cpu: "800m" + cpu: "800m" requests: memory: "600Mi" cpu: "400m" - diff --git a/content/en/examples/pods/pod-nginx-preferred-affinity.yaml b/content/en/examples/pods/pod-nginx-preferred-affinity.yaml new file mode 100644 index 0000000000000..183ba9f014225 --- /dev/null +++ b/content/en/examples/pods/pod-nginx-preferred-affinity.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: Pod +metadata: + name: nginx +spec: + affinity: + nodeAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + preference: + matchExpressions: + - key: disktype + operator: In + values: + - ssd + containers: + - name: nginx + image: nginx + imagePullPolicy: IfNotPresent diff --git a/content/en/examples/pods/pod-nginx-required-affinity.yaml b/content/en/examples/pods/pod-nginx-required-affinity.yaml new file mode 100644 index 0000000000000..a3805eaa8d9c9 --- /dev/null +++ b/content/en/examples/pods/pod-nginx-required-affinity.yaml @@ -0,0 +1,18 @@ +apiVersion: v1 +kind: Pod +metadata: + name: nginx +spec: + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: disktype + operator: In + values: + - ssd + containers: + - name: nginx + image: nginx + imagePullPolicy: IfNotPresent diff --git a/content/en/training/_index.html b/content/en/training/_index.html index aad96f49123ec..53922a9879030 100644 --- a/content/en/training/_index.html +++ b/content/en/training/_index.html @@ -88,7 +88,7 @@

The Certified Kubernetes Administrator (CKA) program provides assurance that CKAs have the skills, knowledge, and competency to perform the responsibilities of Kubernetes administrators.


- Go to Certification
+ Go to Certification
diff --git a/content/id/docs/concepts/configuration/secret.md b/content/id/docs/concepts/configuration/secret.md
new file mode 100644
index 0000000000000..1cb0622197958
--- /dev/null
+++ b/content/id/docs/concepts/configuration/secret.md
@@ -0,0 +1,1060 @@
+---
+title: Secret
+content_template: templates/concept
+feature:
+  title: Secret dan manajemen konfigurasi
+  description: >
+    Menerapkan serta mengubah secret serta konfigurasi aplikasi tanpa melakukan perubahan pada image kamu serta mencegah tereksposnya secret yang kamu miliki pada konfigurasi.
+weight: 50
+---
+
+{{% capture overview %}}
+
+Objek `secret` pada Kubernetes mengizinkan kamu menyimpan dan mengatur informasi yang sifatnya sensitif, seperti
+_password_, token OAuth, dan ssh _keys_. Menyimpan informasi yang sifatnya sensitif ini ke dalam `secret`
+cenderung lebih aman dan fleksibel jika dibandingkan dengan menyimpan informasi tersebut secara apa adanya pada definisi {{< glossary_tooltip term_id="pod" >}} atau di dalam {{< glossary_tooltip text="container image" term_id="image" >}}.
+Silakan lihat [Dokumen desain Secret](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) untuk informasi yang sifatnya mendetail.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Ikhtisar Secret
+
+Sebuah Secret merupakan sebuah objek yang mengandung informasi yang sifatnya
+sensitif, seperti _password_, token, atau _key_. Informasi tersebut sebenarnya bisa saja
+disimpan di dalam spesifikasi Pod atau _image_; meskipun demikian, melakukan penyimpanan
+di dalam objek Secret mengizinkan pengguna untuk memiliki kontrol lebih lanjut mengenai
+bagaimana Secret ini disimpan, serta mencegah tereksposnya informasi sensitif secara
+tidak disengaja.
+
+Baik pengguna maupun sistem memiliki kemampuan untuk membuat objek Secret.
+
+Untuk menggunakan Secret, sebuah Pod haruslah merujuk pada Secret tersebut.
+Sebuah Secret dapat digunakan di dalam sebuah Pod melalui dua cara:
+sebagai _file_ di dalam sebuah {{< glossary_tooltip text="volume" term_id="volume" >}}
+yang di-_mount_ pada salah satu container Pod, atau digunakan oleh kubelet
+ketika menarik _image_ yang digunakan di dalam Pod.
+
+### Secret _Built-in_
+
+#### Sebuah _Service Account_ akan Secara Otomatis Dibuat dan Meng-_attach_ Secret dengan Kredensial API
+
+Kubernetes secara otomatis membuat secret yang mengandung kredensial
+yang digunakan untuk mengakses API, serta secara otomatis memerintahkan Pod untuk menggunakan
+Secret ini.
+
+Mekanisme otomatisasi pembuatan secret dan penggunaan kredensial API dapat dinonaktifkan
+atau di-_override_ jika kamu menginginkannya. Meskipun begitu, jika apa yang kamu butuhkan
+hanyalah mengakses apiserver secara aman, maka mekanisme _default_ inilah yang disarankan.
+
+Baca lebih lanjut dokumentasi [_Service Account_](/docs/tasks/configure-pod-container/configure-service-account/)
+untuk informasi lebih lanjut mengenai bagaimana cara kerja _Service Account_.
+
+### Membuat Objek Secret Kamu Sendiri
+
+#### Membuat Secret dengan Menggunakan kubectl
+
+Misalnya saja, beberapa Pod memerlukan akses ke sebuah basis data. Kemudian _username_
+dan _password_ yang harus digunakan oleh Pod-Pod tersebut berada pada mesin lokal kamu
+dalam bentuk _file-file_ `./username.txt` dan `./password.txt`.
+
+```shell
+# Buatlah file-file yang akan digunakan pada contoh-contoh berikut
+echo -n 'admin' > ./username.txt
+echo -n '1f2d1e2e67df' > ./password.txt
+```
+
+Perintah `kubectl create secret` akan mengemas _file-file_ ini menjadi Secret dan
+membuat sebuah objek pada Apiserver.
+
+```shell
+kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
+```
+```
+secret "db-user-pass" created
+```
+{{< note >}}
+Karakter spesial seperti `$`, `\*`, dan `!` membutuhkan mekanisme _escaping_.
+Jika _password_ yang kamu gunakan mengandung karakter spesial, kamu perlu melakukan _escape_ karakter dengan menggunakan karakter `\\`. Contohnya, apabila _password_ yang kamu miliki adalah `S!B\*d$zDsb`, maka kamu harus memanggil perintah kubectl dengan cara berikut:
+
+    kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password=S\\!B\\\\*d\\$zDsb
+
+Perhatikan bahwa kamu tidak perlu melakukan _escape_ karakter apabila masukan yang kamu berikan merupakan _file_ (`--from-file`).
+{{< /note >}}
+
+Kamu dapat memastikan apakah suatu Secret sudah dibuat atau belum dengan menggunakan perintah:
+
+```shell
+kubectl get secrets
+```
+```
+NAME                  TYPE                                  DATA      AGE
+db-user-pass          Opaque                                2         51s
+```
+```shell
+kubectl describe secrets/db-user-pass
+```
+```
+Name:            db-user-pass
+Namespace:       default
+Labels:          <none>
+Annotations:     <none>
+
+Type:            Opaque
+
+Data
+====
+password.txt:    12 bytes
+username.txt:    5 bytes
+```
+
+{{< note >}}
+Perintah-perintah `kubectl get` dan `kubectl describe` secara _default_ akan
+mencegah ditampilkannya informasi yang ada di dalam Secret.
+Hal ini dilakukan untuk melindungi agar Secret tidak terekspos secara tidak disengaja oleh orang lain,
+atau tersimpan di dalam _log_ _terminal_.
+{{< /note >}}
+
+Kamu dapat membaca [bagaimana cara melakukan _decode_ sebuah secret](#decoding-a-secret)
+untuk mengetahui bagaimana cara melihat isi dari Secret.
+
+#### Membuat Secret Secara Manual
+
+Kamu dapat membuat sebuah Secret dengan terlebih dahulu membuat _file_ yang berisikan
+informasi yang ingin kamu jadikan Secret dalam bentuk yaml atau json dan kemudian membuat objek
+dengan menggunakan _file_ tersebut. [Secret](/docs/reference/generated/kubernetes-api/v1.12/#secret-v1-core)
+mengandung dua buah _map_: _data_ dan _stringData_. _Field_ _data_ digunakan untuk menyimpan sembarang data,
+yang di-_encode_ menggunakan base64. Sementara itu _stringData_ disediakan untuk memudahkan kamu menyimpan
+informasi sensitif dalam format yang tidak di-_encode_.
+
+Sebagai contoh, untuk menyimpan dua buah string di dalam Secret dengan menggunakan _field_ data, ubahlah
+informasi tersebut ke dalam base64 dengan menggunakan mekanisme sebagai berikut:
+
+```shell
+echo -n 'admin' | base64
+YWRtaW4=
+echo -n '1f2d1e2e67df' | base64
+MWYyZDFlMmU2N2Rm
+```
+
+Buatlah sebuah Secret yang memiliki bentuk sebagai berikut:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: mysecret
+type: Opaque
+data:
+  username: YWRtaW4=
+  password: MWYyZDFlMmU2N2Rm
+```
+
+Kemudian buatlah Secret menggunakan perintah [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply):
+
+```shell
+kubectl apply -f ./secret.yaml
+```
+```
+secret "mysecret" created
+```
+
+Untuk beberapa skenario, kamu bisa saja ingin menggunakan opsi _field_ stringData.
+_Field_ ini mengizinkan kamu untuk memberikan masukan berupa informasi yang belum di-_encode_ secara langsung
+pada sebuah Secret, informasi dalam bentuk string ini kemudian akan di-_encode_ ketika Secret dibuat maupun diubah.
+
+Contoh praktikal dari hal ini adalah ketika kamu melakukan proses _deploy_ aplikasi
+yang menggunakan Secret sebagai penyimpanan _file_ konfigurasi, dan kamu ingin mengisi
+bagian dari konfigurasi _file_ tersebut ketika aplikasi di-_deploy_.
+
+Jika kamu ingin aplikasi kamu menggunakan _file_ konfigurasi berikut:
+
+```yaml
+apiUrl: "https://my.api.com/api/v1"
+username: "user"
+password: "password"
+```
+
+Kamu dapat menyimpan Secret ini dengan menggunakan cara berikut:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: mysecret
+type: Opaque
+stringData:
+  config.yaml: |-
+    apiUrl: "https://my.api.com/api/v1"
+    username: {{username}}
+    password: {{password}}
+```
+
+Alat _deployment_ yang kamu gunakan kemudian akan mengubah templat variabel `{{username}}` dan `{{password}}`
+sebelum menjalankan perintah `kubectl apply`.
+
+stringData merupakan _field_ yang sifatnya _write-only_ untuk alasan kenyamanan pengguna.
+_Field_ ini tidak pernah ditampilkan ketika Secret dibaca. Sebagai contoh, misalkan saja kamu menjalankan
+perintah sebagai berikut:
+
+```shell
+kubectl get secret mysecret -o yaml
+```
+
+Keluaran yang diberikan kurang lebih akan ditampilkan sebagai berikut:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  creationTimestamp: 2018-11-15T20:40:59Z
+  name: mysecret
+  namespace: default
+  resourceVersion: "7225"
+  selfLink: /api/v1/namespaces/default/secrets/mysecret
+  uid: c280ad2e-e916-11e8-98f2-025000000001
+type: Opaque
+data:
+  config.yaml: YXBpVXJsOiAiaHR0cHM6Ly9teS5hcGkuY29tL2FwaS92MSIKdXNlcm5hbWU6IHt7dXNlcm5hbWV9fQpwYXNzd29yZDoge3twYXNzd29yZH19
+```
+
+Jika sebuah _field_ dispesifikasikan baik dalam bentuk data maupun stringData,
+maka nilai dari stringData-lah yang akan digunakan. Sebagai contoh, misalkan saja terdapat
+definisi Secret sebagai berikut:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: mysecret
+type: Opaque
+data:
+  username: YWRtaW4=
+stringData:
+  username: administrator
+```
+
+Akan menghasilkan Secret sebagai berikut:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  creationTimestamp: 2018-11-15T20:46:46Z
+  name: mysecret
+  namespace: default
+  resourceVersion: "7579"
+  selfLink: /api/v1/namespaces/default/secrets/mysecret
+  uid: 91460ecb-e917-11e8-98f2-025000000001
+type: Opaque
+data:
+  username: YWRtaW5pc3RyYXRvcg==
+```
+
+Di mana string `YWRtaW5pc3RyYXRvcg==` akan di-_decode_ sebagai `administrator`.
+
+_Key_ dari data dan stringData boleh tersusun atas karakter alfanumerik,
+'-', '_' atau '.'.
+
+**Catatan _Encoding_:** _Value_ dari JSON dan YAML yang sudah diserialisasi dari data Secret
+akan di-_encode_ ke dalam string base64. _Newline_ dianggap tidak valid pada string ini dan harus
+dihilangkan. Ketika pengguna Darwin/macOS menggunakan alat `base64`, maka pengguna
+tersebut harus menghindari opsi `-b` yang digunakan untuk memecah baris yang terlalu panjang.
+Sebaliknya pengguna Linux _harus_ menambahkan opsi `-w 0` pada perintah `base64` atau
+melakukan mekanisme _pipeline_ `base64 | tr -d '\n'` jika tidak terdapat opsi `-w`.
+
+#### Membuat Secret dengan Menggunakan _Generator_
+Kubectl mendukung [mekanisme manajemen objek dengan menggunakan Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/)
+sejak versi 1.14.
Dengan fitur baru ini, kamu juga dapat membuat sebuah Secret dari sebuah _generator_
+dan kemudian mengaplikasikannya untuk membuat sebuah objek pada Apiserver. _Generator_ yang digunakan haruslah
+dispesifikasikan di dalam sebuah _file_ `kustomization.yaml` di dalam sebuah direktori.
+
+Sebagai contoh, untuk menghasilkan sebuah Secret dari _file-file_ `./username.txt` dan `./password.txt`
+```shell
+# Membuat sebuah file kustomization.yaml dengan SecretGenerator
+cat <<EOF >./kustomization.yaml
+secretGenerator:
+- name: db-user-pass
+  files:
+  - username.txt
+  - password.txt
+EOF
+```
+Gunakan direktori _kustomization_ untuk membuat objek Secret yang diinginkan.
+```shell
+$ kubectl apply -k .
+secret/db-user-pass-96mffmfh4k created
+```
+
+Kamu dapat memastikan Secret tersebut sudah dibuat dengan menggunakan perintah berikut:
+
+```shell
+$ kubectl get secrets
+NAME                        TYPE      DATA      AGE
+db-user-pass-96mffmfh4k     Opaque    2         51s
+
+$ kubectl describe secrets/db-user-pass-96mffmfh4k
+Name:            db-user-pass-96mffmfh4k
+Namespace:       default
+Labels:          <none>
+Annotations:     <none>
+
+Type:            Opaque
+
+Data
+====
+password.txt:    12 bytes
+username.txt:    5 bytes
+```
+
+Sebagai contoh, untuk membuat sebuah Secret dari literal `username=admin` dan `password=secret`,
+kamu dapat menspesifikasikan _generator_ Secret pada _file_ `kustomization.yaml` sebagai
+```shell
+# Membuat sebuah file kustomization.yaml dengan menggunakan SecretGenerator
+$ cat <<EOF >./kustomization.yaml
+secretGenerator:
+- name: db-user-pass
+  literals:
+  - username=admin
+  - password=secret
+EOF
+```
+Aplikasikan direktori _kustomization_ untuk membuat objek Secret.
+```shell
+$ kubectl apply -k .
+secret/db-user-pass-dddghtt9b5 created
+```
+{{< note >}}
+Secret yang dihasilkan nantinya akan memiliki tambahan sufiks yang didapatkan dengan melakukan teknik _hashing_
+pada isi Secret tersebut. Hal ini dilakukan untuk menjamin dibuatnya sebuah Secret baru setiap kali terjadi
+perubahan isi dari Secret tersebut.
+{{< /note >}}
+
+#### Melakukan Proses _Decode_ pada Secret
+
+Secret dapat dibaca dengan menggunakan perintah `kubectl get secret`.
+Misalnya saja, untuk membaca Secret yang dibuat pada bagian sebelumnya:
+
+```shell
+kubectl get secret mysecret -o yaml
+```
+```
+apiVersion: v1
+kind: Secret
+metadata:
+  creationTimestamp: 2016-01-22T18:41:56Z
+  name: mysecret
+  namespace: default
+  resourceVersion: "164619"
+  selfLink: /api/v1/namespaces/default/secrets/mysecret
+  uid: cfee02d6-c137-11e5-8d73-42010af00002
+type: Opaque
+data:
+  username: YWRtaW4=
+  password: MWYyZDFlMmU2N2Rm
+```
+
+Kemudian lakukan mekanisme _decode_ pada _field_ _password_:
+
+```shell
+echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
+```
+```
+1f2d1e2e67df
+```
+
+## Menggunakan Secret
+
+Secret dapat di-_mount_ sebagai _volume_ data atau dapat diekspos sebagai {{< glossary_tooltip text="variabel-variabel environment" term_id="container-env-variables" >}}
+yang dapat digunakan di dalam Pod. Secret ini juga dapat digunakan secara langsung
+oleh bagian lain dari sistem, tanpa secara langsung berkaitan dengan Pod.
+Sebagai contoh, Secret dapat berisikan kredensial bagian suatu sistem lain yang digunakan
+untuk berinteraksi dengan sistem eksternal yang kamu butuhkan.
+
+### Menggunakan Secret sebagai _File_ melalui Pod
+
+Berikut adalah langkah yang harus kamu penuhi agar kamu dapat menggunakan Secret di dalam _volume_ dalam sebuah Pod:
+
+1. Buatlah sebuah Secret, atau gunakan sebuah Secret yang sudah kamu buat sebelumnya. Beberapa Pod dapat merujuk pada sebuah Secret yang sama.
+1.
Modifikasi definisi Pod kamu dengan cara menambahkan sebuah _volume_ di bawah `.spec.volumes[]`. Berilah _volume_ tersebut nama, dan pastikan _field_ `.spec.volumes[].secret.secretName` merujuk pada nama yang sama dengan objek secret.
+1. Tambahkan _field_ `.spec.containers[].volumeMounts[]` pada setiap container yang membutuhkan Secret. Berikan spesifikasi `.spec.containers[].volumeMounts[].readOnly = true` dan `.spec.containers[].volumeMounts[].mountPath` pada direktori di mana Secret tersebut diletakkan.
+1. Modifikasi image dan/atau _command line_ kamu agar program yang kamu miliki merujuk pada _file_ di dalam direktori tersebut. Setiap _key_ pada map `data` Secret akan menjadi nama dari sebuah _file_ pada `mountPath`.
+
+Berikut merupakan salah satu contoh di mana sebuah Pod melakukan proses _mount_ Secret pada sebuah _volume_:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mypod
+spec:
+  containers:
+  - name: mypod
+    image: redis
+    volumeMounts:
+    - name: foo
+      mountPath: "/etc/foo"
+      readOnly: true
+  volumes:
+  - name: foo
+    secret:
+      secretName: mysecret
+```
+
+Setiap Secret yang ingin kamu gunakan harus dirujuk pada _field_ `.spec.volumes`.
+
+Jika terdapat lebih dari satu container di dalam Pod,
+maka setiap container akan membutuhkan blok `volumeMounts`-nya masing-masing,
+meskipun demikian hanya sebuah _field_ `.spec.volumes` yang dibutuhkan untuk setiap Secret.
+
+Kamu dapat menyimpan banyak _file_ ke dalam satu Secret,
+atau menggunakan banyak Secret, hal ini tentunya bergantung pada preferensi pengguna.
+
+**Proyeksi _key_ Secret pada Suatu _Path_ Spesifik**
+
+Kita juga dapat mengontrol _path_ di dalam _volume_ di mana sebuah Secret diproyeksikan.
+Kamu dapat menggunakan _field_ `.spec.volumes[].secret.items` untuk mengubah
+_path_ target dari setiap _key_:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mypod
+spec:
+  containers:
+  - name: mypod
+    image: redis
+    volumeMounts:
+    - name: foo
+      mountPath: "/etc/foo"
+      readOnly: true
+  volumes:
+  - name: foo
+    secret:
+      secretName: mysecret
+      items:
+      - key: username
+        path: my-group/my-username
+```
+
+Apa yang akan terjadi jika kita menggunakan definisi di atas:
+
+* Secret `username` akan disimpan pada _file_ `/etc/foo/my-group/my-username` dan bukan `/etc/foo/username`.
+* Secret `password` tidak akan diproyeksikan.
+
+Jika _field_ `.spec.volumes[].secret.items` digunakan, hanya _key-key_ yang dispesifikasikan di dalam
+`items` yang diproyeksikan. Untuk mengonsumsi semua _key-key_ yang ada dari Secret,
+semua _key_ yang ada harus didaftarkan pada _field_ `items`.
+Semua _key_ yang didaftarkan juga harus ada di dalam Secret tadi.
+Jika tidak, _volume_ yang didefinisikan tidak akan dibuat.
+
+**_Permission_ _File-File_ Secret**
+
+Kamu juga dapat menspesifikasikan mode _permission_ dari _file_ Secret yang kamu inginkan.
+Jika kamu tidak menspesifikasikan hal tersebut, maka secara _default_ nilai yang akan diberikan adalah `0644`.
+Kamu dapat memberikan mode _default_ untuk semua Secret yang ada serta melakukan mekanisme _override_ _permission_
+pada setiap _key_ jika memang diperlukan.
+
+Sebagai contoh, kamu dapat memberikan spesifikasi mode _default_ sebagai berikut:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mypod
+spec:
+  containers:
+  - name: mypod
+    image: redis
+    volumeMounts:
+    - name: foo
+      mountPath: "/etc/foo"
+  volumes:
+  - name: foo
+    secret:
+      secretName: mysecret
+      defaultMode: 256
+```
+
+Kemudian, sebuah Secret akan di-_mount_ pada `/etc/foo`, selanjutnya semua _file_
+yang dibuat pada _volume_ secret tersebut akan memiliki _permission_ `0400`.
+
+Perhatikan bahwa spesifikasi JSON tidak mendukung notasi _octal_, dengan demikian gunakanlah
+_value_ 256 untuk _permission_ 0400. Jika kamu menggunakan format YAML untuk spesifikasi Pod,
+kamu dapat menggunakan notasi _octal_ untuk memberikan spesifikasi _permission_ dengan cara yang lebih
+natural.
+
+Kamu juga dapat melakukan mekanisme pemetaan, seperti yang sudah dilakukan pada contoh sebelumnya,
+dan kemudian memberikan spesifikasi _permission_ yang berbeda untuk _file_ yang berbeda.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mypod
+spec:
+  containers:
+  - name: mypod
+    image: redis
+    volumeMounts:
+    - name: foo
+      mountPath: "/etc/foo"
+  volumes:
+  - name: foo
+    secret:
+      secretName: mysecret
+      items:
+      - key: username
+        path: my-group/my-username
+        mode: 511
+```
+
+Pada kasus tersebut, _file_ yang dihasilkan pada `/etc/foo/my-group/my-username` akan memiliki
+_permission_ `0777`. Karena terdapat batasan pada representasi JSON, maka kamu
+harus memberikan spesifikasi mode _permission_ dalam bentuk notasi desimal.
+
+Perhatikan bahwa _permission_ ini bisa saja ditampilkan dalam bentuk notasi desimal,
+hal ini akan ditampilkan pada bagian selanjutnya.
+
+**Mengonsumsi _Value_ dari Secret melalui Volume**
+
+Di dalam sebuah container di mana _volume_ secret di-_mount_,
+_key_ dari Secret akan ditampilkan sebagai _file_ dan _value_ dari Secret yang berada dalam bentuk
+base64 ini akan di-_decode_ dan disimpan pada _file-file_ tadi.
+Berikut merupakan hasil dari eksekusi perintah di dalam container berdasarkan contoh
+yang telah dipaparkan di atas:
+
+```shell
+ls /etc/foo/
+```
+```
+username
+password
+```
+
+```shell
+cat /etc/foo/username
+```
+```
+admin
+```
+
+```shell
+cat /etc/foo/password
+```
+```
+1f2d1e2e67df
+```
+
+Program di dalam container bertanggung jawab untuk membaca Secret
+dari _file-file_ yang ada.
+
+**Secret yang di-_mount_ Akan Diubah Secara Otomatis**
+
+Ketika sebuah Secret yang sedang digunakan di dalam _volume_ diubah,
+maka _key_ yang ada juga akan diubah. Kubelet akan melakukan mekanisme pengecekan secara periodik
+apakah terdapat perubahan pada Secret yang telah di-_mount_. Meskipun demikian,
+proses pengecekan ini dilakukan dengan menggunakan _cache_ lokal untuk mendapatkan _value_ saat ini
+dari sebuah Secret. Tipe _cache_ yang ada dapat diatur dengan menggunakan
+(_field_ `ConfigMapAndSecretChangeDetectionStrategy` pada
+[_struct_ KubeletConfiguration](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go)).
+Mekanisme ini kemudian dapat diteruskan dengan mekanisme _watch_ (_default_), ttl, atau pengalihan semua
+_request_ secara langsung pada kube-apiserver.
+Sebagai hasilnya, _delay_ total dari pertama kali Secret diubah hingga dilakukannya mekanisme
+proyeksi _key_ yang baru pada Pod berlangsung dalam jangka waktu sinkronisasi periodik kubelet +
+_delay_ propagasi _cache_, di mana _delay_ propagasi _cache_ bergantung pada jenis _cache_ yang digunakan
+(ini sama dengan _delay_ propagasi _watch_, ttl dari _cache_, atau nol).
+
+{{< note >}}
+Sebuah container yang menggunakan Secret sebagai
+[subPath](/docs/concepts/storage/volumes#using-subpath) dari _volume_
+yang di-_mount_ tidak akan menerima perubahan Secret.
+{{< /note >}}
+
+### Menggunakan Secret sebagai Variabel _Environment_
+
+Berikut merupakan langkah-langkah yang harus kamu terapkan
+untuk menggunakan Secret sebagai {{< glossary_tooltip text="variabel _environment_" term_id="container-env-variables" >}}
+pada sebuah Pod:
+
+1. Buatlah sebuah Secret, atau gunakan sebuah Secret yang sudah kamu buat sebelumnya. Beberapa Pod dapat merujuk pada sebuah Secret yang sama.
+1. Modifikasi definisi Pod kamu pada setiap container di mana kamu ingin mengonsumsi _value_ dari sebuah _key_ Secret, dengan menambahkan sebuah variabel _environment_ untuk setiap _key_ Secret yang ingin kamu konsumsi. Variabel _environment_ yang mengonsumsi _key_ Secret tersebut harus mencantumkan nama serta _key_ dari Secret pada `env[].valueFrom.secretKeyRef`.
+1. Modifikasi _image_ dan/atau _command line_ kamu agar program yang kamu miliki merujuk pada _value_ yang sudah didefinisikan pada variabel _environment_.
+
+Berikut merupakan contoh di mana sebuah Pod menggunakan Secret sebagai variabel _environment_:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-env-pod
+spec:
+  containers:
+  - name: mycontainer
+    image: redis
+    env:
+      - name: SECRET_USERNAME
+        valueFrom:
+          secretKeyRef:
+            name: mysecret
+            key: username
+      - name: SECRET_PASSWORD
+        valueFrom:
+          secretKeyRef:
+            name: mysecret
+            key: password
+  restartPolicy: Never
+```
+
+**Menggunakan Secret dari Variabel _Environment_**
+
+Di dalam sebuah container yang mengonsumsi Secret melalui variabel _environment_, _key_ dari sebuah Secret
+akan ditampilkan sebagai variabel _environment_ pada umumnya dengan _value_ berupa informasi yang telah di-_decode_
+dari base64. Berikut merupakan hasil yang didapatkan apabila perintah-perintah di bawah ini
+dijalankan dari dalam container yang didefinisikan di atas:
+
+```shell
+echo $SECRET_USERNAME
+```
+```
+admin
+```
+```shell
+echo $SECRET_PASSWORD
+```
+```
+1f2d1e2e67df
+```
+
+### Menggunakan imagePullSecrets
+
+Sebuah `imagePullSecret` merupakan salah satu cara yang dapat digunakan untuk menempatkan secret
+yang mengandung _password_ dari registri Docker (atau registri _image_ lainnya)
+pada Kubelet, sehingga Kubelet dapat mengunduh _image_ dan menempatkannya pada Pod.
+
+**Memberikan spesifikasi manual dari sebuah imagePullSecret**
+
+Penggunaan imagePullSecrets dideskripsikan di dalam [dokumentasi _image_](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)
+
+### Mekanisme yang Dapat Diterapkan agar imagePullSecrets dapat Secara Otomatis Digunakan
+
+Kamu dapat secara manual membuat sebuah imagePullSecret, serta merujuk imagePullSecret
+yang sudah kamu buat dari sebuah serviceAccount. Semua Pod yang dibuat dengan menggunakan
+serviceAccount tadi atau serviceAccount _default_ akan menerima _field_ imagePullSecret dari
+serviceAccount yang digunakan.
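+
+Sebagai gambaran singkat (nama Secret, alamat registri, dan kredensial di bawah ini
+hanyalah contoh ilustratif, bukan berasal dari halaman ini), sebuah imagePullSecret
+dapat dibuat dan kemudian ditambahkan pada sebuah _service account_ dengan cara berikut:
+
+```shell
+# Buat sebuah Secret bertipe docker-registry (semua nilai di bawah hanyalah contoh)
+kubectl create secret docker-registry myregistrykey \
+  --docker-server=registry.example.com \
+  --docker-username=DUMMY_USERNAME \
+  --docker-password=DUMMY_PASSWORD \
+  --docker-email=DUMMY_EMAIL@example.com
+
+# Tambahkan Secret tersebut sebagai imagePullSecret pada service account "default"
+kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
+```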
+Bacalah [Cara menambahkan ImagePullSecrets pada sebuah _service account_](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)
+untuk informasi lebih detail soal proses yang dijalankan.
+
+### Mekanisme _Mounting_ Otomatis dari Secret yang Sudah Dibuat
+
+Secret yang dibuat secara manual (misalnya, secret yang mengandung token yang dapat digunakan
+untuk mengakses akun GitHub) dapat di-_mount_ secara otomatis pada sebuah Pod berdasarkan _service account_
+yang digunakan oleh Pod tadi.
+Baca [Bagaimana Penggunaan PodPreset untuk Memasukkan Informasi ke Dalam Pod](/docs/tasks/inject-data-application/podpreset/) untuk informasi lebih lanjut.
+
+## Detail
+
+### Batasan-Batasan
+
+Sumber dari _secret volume_ akan divalidasi untuk menjamin rujukan pada
+objek yang dispesifikasikan mengarah pada objek dengan _type_ `Secret`.
+Oleh karenanya, sebuah _secret_ harus dibuat sebelum Pod yang merujuk pada _secret_
+tersebut dibuat.
+
+Sebuah objek API Secret berada di dalam sebuah {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
+Objek-objek ini hanya dapat dirujuk oleh Pod-Pod yang ada pada namespace yang sama.
+
+Secret memiliki batasan dalam hal ukuran maksimalnya, yaitu hanya sampai 1MiB per objek.
+Oleh karena itulah, pembuatan secret dalam ukuran yang sangat besar tidak dianjurkan
+karena dapat menghabiskan sumber daya apiserver dan memori kubelet. Meskipun demikian,
+pembuatan banyak secret dengan ukuran kecil juga dapat menghabiskan memori. Pembatasan
+sumber daya yang diizinkan untuk pembuatan secret merupakan salah satu fitur tambahan
+yang direncanakan ke depannya.
+
+Kubelet hanya mendukung penggunaan secret oleh Pod apabila Pod tersebut
+didapatkan melalui apiserver. Hal ini termasuk Pod yang dibuat dengan menggunakan
+kubectl, atau secara tak langsung melalui _replication controller_. Hal ini tidak
+termasuk Pod yang dibuat melalui _flag_ `--manifest-url` yang ada pada kubelet,
+maupun REST API yang disediakan (hal ini bukanlah merupakan mekanisme umum yang dilakukan
+untuk membuat sebuah Pod).
+
+Secret harus dibuat sebelum digunakan oleh Pod sebagai variabel _environment_,
+kecuali apabila variabel _environment_ ini dianggap opsional. Rujukan pada Secret
+yang tidak dapat dipenuhi akan menyebabkan Pod gagal _start_.
+
+Rujukan melalui `secretKeyRef` pada _key_ yang tidak ada pada _named_ Secret
+akan menyebabkan Pod gagal _start_.
+
+Secret yang digunakan untuk memenuhi variabel _environment_ melalui `envFrom` yang
+memiliki _key_ dengan penamaan yang dianggap tidak valid akan membuat _key-key_
+tersebut diabaikan, namun Pod tetap diizinkan untuk _start_. Selanjutnya akan terdapat _event_
+dengan alasan `InvalidVariableNames` dan pesan yang berisikan _list_ dari
+_key_ yang diabaikan akibat penamaan yang tidak valid. Contoh berikut menunjukkan
+sebuah Pod yang merujuk pada secret `default/mysecret` yang mengandung dua buah _key_
+yang tidak valid, yaitu `1badkey` dan `2alsobad`.
+
+```shell
+kubectl get events
+```
+```
+LASTSEEN   FIRSTSEEN   COUNT     NAME            KIND      SUBOBJECT   TYPE      REASON
+0s         0s          1         dapi-test-pod   Pod                   Warning   InvalidEnvironmentVariableNames   kubelet, 127.0.0.1      Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names.
+```
+
+### Interaksi Secret dan Pod Lifetime
+
+Ketika sebuah Pod dibuat melalui API, tidak terdapat mekanisme pengecekan
+yang digunakan untuk mengetahui apakah sebuah Secret yang dirujuk sudah dibuat
+atau belum.
Ketika sebuah Pod di-_schedule_, kubelet akan mencoba mengambil
+informasi mengenai _value_ dari secret tadi. Jika secret tidak dapat diambil
+_value_-nya dengan alasan temporer karena hilangnya koneksi ke API server atau
+karena secret yang dirujuk tidak ada, kubelet akan melakukan mekanisme _retry_ secara periodik.
+Kubelet juga akan memberikan laporan mengenai _event_ yang terjadi pada Pod serta alasan
+kenapa Pod tersebut belum di-_start_. Apabila Secret berhasil didapatkan, kubelet
+akan membuat dan me-_mount_ volume yang mengandung secret tersebut. Tidak akan ada
+container dalam pod yang akan di-_start_ hingga semua volume pod berhasil di-_mount_.
+
+## Contoh-Contoh Penggunaan
+
+### Contoh Penggunaan: Pod dengan _ssh key_
+
+Buatlah sebuah Secret yang mengandung beberapa _ssh key_:
+
+```shell
+kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
+```
+
+```
+secret "ssh-key-secret" created
+```
+
+{{< caution >}}
+Pikirkanlah terlebih dahulu sebelum kamu menggunakan _ssh key_ milikmu sendiri: pengguna lain pada kluster tersebut bisa saja memiliki akses pada secret yang kamu definisikan.
+Gunakanlah service account untuk membagi informasi yang kamu inginkan di dalam kluster tersebut, dengan demikian kamu dapat membatalkan service account tersebut apabila secret tersebut disalahgunakan.
+{{< /caution >}}
+
+
+Sekarang, kita dapat membuat sebuah pod yang merujuk pada secret dengan _ssh key_ yang sudah
+dibuat tadi serta menggunakannya melalui sebuah volume yang di-_mount_:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-test-pod
+  labels:
+    name: secret-test
+spec:
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: ssh-key-secret
+  containers:
+  - name: ssh-test-container
+    image: mySshImage
+    volumeMounts:
+    - name: secret-volume
+      readOnly: true
+      mountPath: "/etc/secret-volume"
+```
+
+Ketika sebuah perintah dijalankan di dalam container, bagian dari _key_ tadi akan
+terdapat pada:
+
+```shell
+/etc/secret-volume/ssh-publickey
+/etc/secret-volume/ssh-privatekey
+```
+
+Container kemudian dapat menggunakan secret secara bebas untuk
+membuat koneksi ssh.
+
+### Contoh Penggunaan: Pod dengan kredensial prod / test
+
+Contoh ini memberikan ilustrasi pod yang mengonsumsi secret yang mengandung
+kredensial dari _environment_ _production_ atau _environment_ _test_.
+
+Buatlah Secret untuk masing-masing kredensial:
+
+```shell
+kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11
+```
+```
+secret "prod-db-secret" created
+```
+
+```shell
+kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests
+```
+```
+secret "test-db-secret" created
+```
+{{< note >}}
+Karakter spesial seperti `$`, `\*`, dan `!` membutuhkan mekanisme _escaping_.
+Jika password yang kamu gunakan memiliki karakter spesial, kamu dapat melakukan mekanisme _escape_
+menggunakan karakter `\\`. Sebagai contohnya, jika _password_ kamu yang sebenarnya adalah
+`S!B\*d$zDsb`, maka kamu harus memanggil perintah eksekusi dengan cara sebagai berikut:
+
+```shell
+kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password=S\\!B\\\*d\\$zDsb
+```
+
+Kamu tidak perlu melakukan mekanisme _escape_ karakter apabila menggunakan opsi melalui _file_ (`--from-file`).
+{{< /note >}}
+
+Kemudian buatlah Pod-Pod yang dibutuhkan:
+
+```shell
+cat <<EOF > pod.yaml
+apiVersion: v1
+kind: List
+items:
+- kind: Pod
+  apiVersion: v1
+  metadata:
+    name: prod-db-client-pod
+    labels:
+      name: prod-db-client
+  spec:
+    volumes:
+    - name: secret-volume
+      secret:
+        secretName: prod-db-secret
+    containers:
+    - name: db-client-container
+      image: myClientImage
+      volumeMounts:
+      - name: secret-volume
+        readOnly: true
+        mountPath: "/etc/secret-volume"
+- kind: Pod
+  apiVersion: v1
+  metadata:
+    name: test-db-client-pod
+    labels:
+      name: test-db-client
+  spec:
+    volumes:
+    - name: secret-volume
+      secret:
+        secretName: test-db-secret
+    containers:
+    - name: db-client-container
+      image: myClientImage
+      volumeMounts:
+      - name: secret-volume
+        readOnly: true
+        mountPath: "/etc/secret-volume"
+EOF
+```
+
+Tambahkan Pod-Pod tadi pada sebuah _file_ kustomization.yaml:
+
+```shell
+cat <<EOF >> kustomization.yaml
+resources:
+- pod.yaml
+EOF
+```
+
+Terapkan semua perubahan pada objek-objek tadi ke Apiserver dengan menggunakan
+
+```shell
+kubectl apply -k .
+```
+
+Kedua container kemudian akan memiliki _file-file_ berikut ini di dalam
+_filesystem_ keduanya dengan _value_ sebagai berikut untuk masing-masing _environment_:
+
+```shell
+/etc/secret-volume/username
+/etc/secret-volume/password
+```
+
+Perhatikan bahwa _specs_ untuk kedua pod berbeda hanya pada satu _field_ saja;
+hal ini bertujuan untuk memfasilitasi adanya kapabilitas yang berbeda dari templat
+konfigurasi umum yang tersedia.
+
+Kamu dapat menyederhanakan spesifikasi dasar Pod dengan menggunakan dua buah _service account_ yang berbeda:
+misalnya saja salah satunya disebut sebagai `prod-user` dengan `prod-db-secret`, dan satunya lagi disebut
+`test-user` dengan `test-db-secret`. Kemudian spesifikasi Pod tadi dapat diringkas menjadi:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: prod-db-client-pod
+  labels:
+    name: prod-db-client
+spec:
+  serviceAccount: prod-db-client
+  containers:
+  - name: db-client-container
+    image: myClientImage
+```
+
+### Contoh Penggunaan: _Dotfiles_ pada volume secret
+
+Dengan tujuan membuat data yang ada 'tersembunyi' (misalnya, di dalam sebuah _file_ dengan nama yang dimulai
+dengan karakter titik), kamu dapat melakukannya dengan cara yang cukup sederhana, yaitu cukup dengan mengawali
+_key_ yang kamu inginkan dengan karakter titik. Contohnya, ketika sebuah secret di bawah ini di-_mount_
+pada sebuah volume:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: dotfile-secret
+data:
+  .secret-file: dmFsdWUtMg0KDQo=
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-dotfiles-pod
+spec:
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: dotfile-secret
+  containers:
+  - name: dotfile-test-container
+    image: k8s.gcr.io/busybox
+    command:
+    - ls
+    - "-l"
+    - "/etc/secret-volume"
+    volumeMounts:
+    - name: secret-volume
+      readOnly: true
+      mountPath: "/etc/secret-volume"
+```
+
+
+Volume `secret-volume` akan mengandung sebuah _file_, yang disebut sebagai `.secret-file`, serta
+container `dotfile-test-container` akan memiliki _file_ konfigurasinya pada _path_
+`/etc/secret-volume/.secret-file`.
+
+{{< note >}}
+_File-file_ yang diawali dengan karakter titik akan "tersembunyi" dari keluaran perintah `ls -l`;
+kamu harus menggunakan perintah `ls -la` untuk melihat _file-file_ tadi dari sebuah direktori.
+{{< /note >}}
+
+### Contoh Penggunaan: Secret yang dapat diakses hanya pada salah satu container di dalam pod
+
+Misalkan terdapat sebuah program yang memiliki kebutuhan untuk menangani _request_ HTTP,
+melakukan logika bisnis yang kompleks, serta kemudian menandai beberapa _message_ yang ada
+dengan menggunakan HMAC. Karena program ini memiliki logika aplikasi yang cukup kompleks,
+maka bisa jadi terdapat celah eksploitasi pembacaan _file_ jarak jauh (_remote file reading_)
+pada server yang tidak disadari, yang nantinya bisa saja mengekspos _private key_ kepada _attacker_.
+
+Hal ini dapat dipisah menjadi dua buah proses yang berbeda di dalam dua container:
+sebuah container _frontend_ yang menangani interaksi pengguna dan logika bisnis, tetapi
+tidak memiliki kapabilitas untuk melihat _private key_; container lain memiliki kapabilitas
+melihat _private key_ yang ada dan memiliki fungsi untuk menandai _request_ yang berasal
+dari _frontend_ (melalui jaringan _localhost_).
+
+Dengan strategi ini, seorang _attacker_ harus melakukan teknik tambahan
+untuk memaksa server aplikasi melakukan sesuatu yang cukup arbitrer, yang mana
+bisa jadi lebih sulit dibandingkan sekadar membuatnya membaca sebuah _file_.
+
+
+
+## _Best practices_
+
+### Klien yang menggunakan API secret
+
+Ketika men-_deploy_ aplikasi yang berinteraksi dengan API secret, akses yang dilakukan
+haruslah dibatasi menggunakan [_policy_ autorisasi](
+/docs/reference/access-authn-authz/authorization/) seperti [RBAC](
+/docs/reference/access-authn-authz/rbac/).
+
+Secret seringkali menyimpan _value_ yang memiliki jangkauan spektrum
+kepentingan, yang mungkin saja dapat menyebabkan terjadinya eskalasi baik
+di dalam Kubernetes (misalnya saja token dari sebuah _service account_) maupun
+sistem eksternal. Bahkan apabila setiap aplikasi secara individual memiliki
+kapabilitas untuk memahami tingkatan yang dimilikinya untuk berinteraksi dengan secret tertentu,
+aplikasi lain dalam namespace itu bisa saja menyebabkan asumsi tersebut menjadi tidak valid.
+
+Karena alasan-alasan yang sudah disebutkan tadi, _request_ `watch` dan `list` untuk sebuah
+secret di dalam suatu namespace merupakan kapabilitas yang sebisa mungkin harus dihindari,
+karena menampilkan semua secret yang ada berimplikasi pada akses untuk melihat isi
+secret-secret tersebut. Kapabilitas untuk melakukan _request_ `watch` dan `list` pada semua secret di kluster
+hanya boleh dimiliki oleh komponen pada level sistem yang paling _privileged_.
+
+Aplikasi yang membutuhkan akses ke API secret harus melakukan _request_ `get` pada
+secret yang dibutuhkan. Hal ini memungkinkan administrator untuk membatasi
+akses pada semua secret dengan tetap memberikan [akses pada instans secret tertentu](/docs/reference/access-authn-authz/rbac/#referring-to-resources)
+yang dibutuhkan aplikasi, seperti pada contoh di bawah.
+
+Untuk meningkatkan performa dengan menggunakan iterasi `get`, klien dapat mendesain
+sumber daya yang merujuk pada suatu secret dan kemudian melakukan `watch` pada sumber daya tersebut,
+serta melakukan _request_ ulang terhadap secret ketika terjadi perubahan pada rujukan tadi. Sebagai tambahan, [API "bulk watch"](
+https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/bulk_watch.md)
+yang dapat memberikan kapabilitas `watch` individual pada sumber daya melalui klien juga sudah direncanakan,
+dan kemungkinan akan diimplementasikan di rilis Kubernetes selanjutnya.
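+
+Sebagai gambaran dari praktik di atas, berikut sebuah sketsa Role RBAC (dengan asumsi
+sebuah Secret bernama `mysecret` pada namespace `default`) yang hanya mengizinkan
+_request_ `get` pada satu instans Secret tertentu:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  namespace: default
+  name: secret-reader
+rules:
+# hanya mengizinkan `get` pada satu Secret tertentu, tanpa kapabilitas
+# `list` maupun `watch` terhadap semua Secret di dalam namespace
+- apiGroups: [""]
+  resources: ["secrets"]
+  resourceNames: ["mysecret"]
+  verbs: ["get"]
+```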
+
+## Properti Keamanan
+
+
+### Proteksi
+
+Karena objek `secret` dapat dibuat secara independen dari `pod` yang menggunakannya,
+risiko tereksposnya secret di dalam workflow pembuatan, pemantauan, serta pengubahan pod
+menjadi lebih kecil. Sistem yang ada juga dapat memberikan tindakan pencegahan ketika berinteraksi dengan `secret`,
+misalnya saja tidak melakukan penulisan isi `secret` ke dalam disk apabila hal tersebut
+memungkinkan.
+
+Sebuah secret hanya diberikan pada node apabila pod yang ada di dalam node
+membutuhkan secret tersebut. Kubelet menyimpan secret yang ada pada `tmpfs`
+sehingga secret tidak ditulis pada disk. Setelah pod yang bergantung pada secret tersebut dihapus,
+maka kubelet juga akan menghapus salinan lokal data secret.
+
+Di dalam sebuah node bisa saja terdapat beberapa secret yang dibutuhkan
+oleh pod yang ada di dalamnya. Meskipun demikian, hanya secret yang di-_request_
+oleh sebuah pod saja yang dapat dilihat oleh container yang ada di dalamnya.
+Dengan demikian, sebuah Pod tidak memiliki akses untuk melihat secret yang ada
+pada pod yang lain.
+
+Di dalam sebuah pod bisa jadi terdapat beberapa container.
+Meskipun demikian, agar sebuah container bisa mengakses _volume secret_, container
+tersebut haruslah me-_request_ volume tadi pada `volumeMounts` miliknya agar volume tersebut
+dapat diakses dari dalam container. Pengetahuan ini dapat digunakan untuk membentuk [partisi security
+pada level pod](#contoh-penggunaan-secret-yang-dapat-diakses-hanya-pada-salah-satu-container-di-dalam-pod).
+
+Pada sebagian besar distribusi yang dipelihara proyek Kubernetes,
+komunikasi antara pengguna dan apiserver serta antara apiserver dan kubelet dilindungi dengan menggunakan SSL/TLS.
+Dengan demikian, secret berada dalam keadaan terlindungi ketika ditransmisikan.
+
+{{< feature-state for_k8s_version="v1.13" state="beta" >}}
+
+Kamu dapat mengaktifkan [enkripsi data _at rest_](/docs/tasks/administer-cluster/encrypt-data/)
+untuk data secret, sehingga secret yang ada tidak akan ditulis ke dalam {{< glossary_tooltip term_id="etcd" >}}
+dalam keadaan tidak terenkripsi.
+
+### Risiko
+
+ - Pada API server, data secret disimpan di dalam {{< glossary_tooltip term_id="etcd" >}};
+   dengan demikian:
+   - Administrator harus mengaktifkan enkripsi data _at rest_ untuk data kluster (membutuhkan versi v1.13 atau lebih)
+   - Administrator harus membatasi akses etcd hanya untuk pengguna dengan kapabilitas admin
+   - Administrator bisa saja menghapus data disk yang sudah tidak lagi digunakan oleh etcd
+   - Jika etcd dijalankan di dalam kluster, administrator harus memastikan SSL/TLS
+     digunakan pada proses komunikasi peer-to-peer etcd.
+ - Jika kamu melakukan konfigurasi melalui sebuah _file_ manifest (JSON atau YAML)
+   yang menyimpan data secret dalam bentuk base64, membagikan atau menyimpan manifest ini
+   dalam repositori kode sumber sama artinya dengan membagikan informasi mengenai data secret.
+   Mekanisme _encoding_ base64 bukanlah merupakan teknik enkripsi dan nilainya dianggap sama saja dengan _plain text_.
+ - Aplikasi masih harus melindungi _value_ dari secret setelah membaca nilainya dari suatu volume,
+   dengan demikian risiko tercatatnya (_logging_) secret secara tidak disengaja dapat dihindari.
+ - Seorang pengguna yang dapat membuat suatu pod yang menggunakan secret juga dapat melihat _value_ dari secret tersebut.
+   Bahkan apabila _policy_ apiserver tidak memberikan kapabilitas untuk membaca objek secret, pengguna
+   dapat menjalankan pod yang mengekspos secret.
+ - Saat ini, semua orang dengan akses _root_ pada node dapat membaca secret _apapun_ dari apiserver
+   dengan cara meniru kubelet. Meskipun begitu, terdapat fitur yang direncanakan pada rilis selanjutnya
+   agar secret hanya dikirimkan pada node yang membutuhkan secret tersebut, untuk membatasi adanya eksploitasi akses _root_ pada node ini.
+
+{{% capture whatsnext %}}
+
+{{% /capture %}}
diff --git a/content/id/docs/concepts/containers/images.md b/content/id/docs/concepts/containers/images.md
new file mode 100644
index 0000000000000..59f980a35ae65
--- /dev/null
+++ b/content/id/docs/concepts/containers/images.md
@@ -0,0 +1,370 @@
+---
+title: Image
+content_template: templates/concept
+weight: 10
+---
+
+{{% capture overview %}}
+
+Kamu membuat Docker _image_ dan mengunggahnya (_push_) ke sebuah registri sebelum digunakan di dalam Kubernetes Pod.
+
+Properti `image` dari sebuah Container mendukung sintaksis yang sama seperti perintah `docker`, termasuk registri privat dan _tag_.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Memperbarui Image
+
+Kebijakan _pull_ _default_ adalah `IfNotPresent`, yang membuat kubelet tidak
+mengunduh (_pull_) sebuah _image_ jika _image_ tersebut sudah ada sebelumnya. Jika kamu ingin agar
+_image_ selalu diunduh, kamu bisa melakukan salah satu dari berikut:
+
+- mengatur `imagePullPolicy` dari Container menjadi `Always`.
+- membuang `imagePullPolicy` dan menggunakan _tag_ `:latest` untuk _image_ yang digunakan.
+- membuang `imagePullPolicy` dan juga _tag_ untuk _image_.
+- mengaktifkan [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) _admission controller_.
+
+Harap diingat bahwa kamu sebaiknya menghindari penggunaan _tag_ `:latest`, lihat [panduan konfigurasi](/docs/concepts/configuration/overview/#container-images) untuk informasi lebih lanjut.
+
+## Membuat Image Multi-arsitektur dengan Manifest
+
+Docker CLI saat ini mendukung perintah `docker manifest` dengan anak perintah `create`, `annotate`, dan `push`. Perintah-perintah ini dapat digunakan
+untuk membuat (_build_) dan mengunggah (_push_) manifes. Kamu dapat menggunakan perintah `docker manifest inspect` untuk membaca manifes.
+
+Lihat dokumentasi docker di sini:
+https://docs.docker.com/edge/engine/reference/commandline/manifest/
+
+Lihat contoh-contoh bagaimana kami menggunakan ini untuk proses _build harness_:
+https://cs.k8s.io/?q=docker%20manifest%20(create%7Cpush%7Cannotate)&i=nope&files=&repos=
+
+Perintah-perintah ini bergantung pada Docker CLI, dan diimplementasi hanya di sisi CLI. Kamu harus mengubah `$HOME/.docker/config.json` dan mengatur _key_ `experimental` untuk mengaktifkannya,
+atau cukup mengatur variabel _environment_ `DOCKER_CLI_EXPERIMENTAL` menjadi `enabled` ketika memanggil perintah-perintah CLI.
+
+{{< note >}}
+Gunakan Docker *18.06 ke atas*, versi-versi di bawahnya memiliki _bug_ ataupun tidak mendukung perintah eksperimental. Contohnya https://github.com/docker/cli/issues/1135 yang menyebabkan masalah di bawah containerd.
+{{< /note >}}
+
+Kalau kamu terkena masalah ketika mengunggah manifes-manifes yang rusak, cukup bersihkan manifes-manifes yang lama di `$HOME/.docker/manifests` untuk memulai dari awal.
+
+Untuk Kubernetes, kami biasanya menggunakan _image-image_ dengan sufiks `-$(ARCH)`. Untuk kompatibilitas (_backward compatibility_), lakukan _generate image-image_ yang lama dengan sufiks.
Idenya adalah men-_generate_, misalnya _image_ `pause` yang memiliki manifes untuk semua arsitektur dan misalnya `pause-amd64` yang punya kompatibilitas terhadap konfigurasi-konfigurasi lama atau berkas-berkas YAML yang bisa saja punya _image-image_ bersufiks yang di-_hardcode_.
+
+## Menggunakan Registri Privat (_Private Registry_) {#menggunakan-registri-privat}
+
+Biasanya kita memerlukan _key_ untuk membaca _image-image_ yang tersedia pada suatu registri privat.
+Kredensial ini dapat disediakan melalui beberapa cara:
+
+ - Menggunakan Google Container Registry
+   - per-klaster
+   - konfigurasi secara otomatis pada Google Compute Engine atau Google Kubernetes Engine
+   - semua Pod dapat membaca registri privat yang ada di dalam proyek
+ - Menggunakan Amazon Elastic Container Registry (ECR)
+   - menggunakan IAM _role_ dan _policy_ untuk mengontrol akses ke repositori ECR
+   - secara otomatis me-_refresh_ kredensial login ECR
+ - Menggunakan Oracle Cloud Infrastructure Registry (OCIR)
+   - menggunakan IAM _role_ dan _policy_ untuk mengontrol akses ke repositori OCIR
+ - Menggunakan Azure Container Registry (ACR)
+ - Menggunakan IBM Cloud Container Registry
+   - menggunakan IAM _role_ dan _policy_ untuk memberikan akses ke IBM Cloud Container Registry
+ - Konfigurasi Node untuk otentikasi registri privat
+   - semua Pod dapat membaca registri privat manapun
+   - memerlukan konfigurasi Node oleh admin klaster
+ - Pra-unduh _image_
+   - semua Pod dapat menggunakan _image_ apapun yang di-_cache_ di dalam sebuah Node
+   - memerlukan akses root ke dalam semua Node untuk pengaturannya
+ - Mengatur ImagePullSecrets dalam sebuah Pod
+   - hanya Pod-Pod yang menyediakan _key_ sendiri yang dapat mengakses registri privat
+
+Masing-masing opsi dijelaskan lebih lanjut di bawah ini.
+
+### Menggunakan Google Container Registry
+
+Kubernetes memiliki dukungan _native_ untuk [Google Container
+Registry (GCR)](https://cloud.google.com/tools/container-registry/) ketika dijalankan pada
+Google Compute Engine (GCE). Jika kamu menjalankan klaster pada GCE atau Google Kubernetes Engine,
+cukup gunakan nama panjang _image_ (misalnya gcr.io/my_project/image:tag).
+
+Semua Pod di dalam klaster akan memiliki akses baca _image_ di registri ini.
+
+Kubelet akan melakukan otentikasi GCR menggunakan _service account_ yang dimiliki
+_instance_ Google. _Service account_ pada _instance_ akan memiliki cakupan (_scope_) `https://www.googleapis.com/auth/devstorage.read_only`,
+sehingga dapat mengunduh dari GCR di proyek yang sama, tetapi tidak dapat mengunggah.
+
+### Menggunakan Amazon Elastic Container Registry
+
+Kubernetes memiliki dukungan _native_ untuk [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/) ketika Node merupakan
+AWS EC2 _instance_.
+
+Cukup gunakan nama panjang _image_ (misalnya `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`) di dalam definisi Pod.
+
+Semua pengguna klaster yang dapat membuat Pod akan bisa menjalankan Pod yang dapat menggunakan
+_image-image_ di dalam registri ECR.
+
+Kubelet akan mengambil dan secara periodik memperbarui kredensial ECR, yang memerlukan _permission_ sebagai berikut:
+
+- `ecr:GetAuthorizationToken`
+- `ecr:BatchCheckLayerAvailability`
+- `ecr:GetDownloadUrlForLayer`
+- `ecr:GetRepositoryPolicy`
+- `ecr:DescribeRepositories`
+- `ecr:ListImages`
+- `ecr:BatchGetImage`
+
+Persyaratan:
+
+- Kamu harus menggunakan kubelet versi `v1.2.0` atau lebih baru (misal jalankan `/usr/bin/kubelet --version=true`).
+- Jika Node yang kamu miliki ada di region A dan registri kamu ada di region yang berbeda misalnya B, kamu perlu versi `v1.3.0` atau lebih baru.
+- ECR harus tersedia di region kamu.
+
+Cara _troubleshoot_:
+
+- Verifikasi semua persyaratan di atas.
+- Dapatkan kredensial $REGION (misalnya `us-west-2`) pada _workstation_ kamu. Lakukan SSH ke dalam _host_ dan jalankan Docker secara manual menggunakan kredensial tersebut. Apakah berhasil?
+- Naikkan verbositas level _log_ kubelet menjadi paling tidak 3 dan periksa _log_ kubelet (misal dengan `journalctl -u kubelet`) untuk baris-baris seperti ini:
+  - `aws_credentials.go:109] unable to get ECR credentials from cache, checking ECR API`
+  - `aws_credentials.go:116] Got ECR credentials from ECR API for <aws-account-id>.dkr.ecr.<aws-region>.amazonaws.com`
+
+### Menggunakan Azure Container Registry (ACR)
+Ketika menggunakan [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/),
+kamu dapat melakukan otentikasi menggunakan pengguna admin maupun sebuah _service principal_.
+Untuk keduanya, otentikasi dilakukan melalui proses otentikasi Docker standar. Instruksi-instruksi ini
+menggunakan perangkat [azure-cli](https://github.com/azure/azure-cli).
+
+Pertama, kamu perlu membuat sebuah registri dan men-_generate_ kredensial; dokumentasi yang lengkap tentang hal ini
+dapat dilihat pada [dokumentasi Azure container registry](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli).
+
+Setelah kamu membuat registri, kamu akan menggunakan kredensial berikut untuk login:
+
+   * `DOCKER_USER` : _service principal_, atau pengguna admin
+   * `DOCKER_PASSWORD`: kata sandi dari _service principal_, atau kata sandi dari pengguna admin
+   * `DOCKER_REGISTRY_SERVER`: `${some-registry-name}.azurecr.io`
+   * `DOCKER_EMAIL`: `${some-email-address}`
+
+Ketika kamu sudah memiliki variabel-variabel di atas, kamu dapat
+[mengkonfigurasi sebuah Kubernetes Secret dan menggunakannya untuk men-_deploy_ sebuah Pod](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod).
+
+### Menggunakan IBM Cloud Container Registry
+IBM Cloud Container Registry menyediakan sebuah registri _image_ privat yang _multi-tenant_, yang dapat kamu gunakan untuk menyimpan dan membagikan _image-image_ secara aman. Secara _default_, _image-image_ di dalam registri privat kamu akan dipindai (_scan_) oleh Vulnerability Advisor yang terintegrasi untuk mendeteksi isu
+keamanan dan kerentanan (_vulnerability_) yang berpotensi muncul. Para pengguna di dalam akun IBM Cloud kamu dapat mengakses _image_ tersebut, atau kamu dapat menggunakan IAM
+_role_ dan _policy_ untuk memberikan akses ke _namespace_ di IBM Cloud Container Registry.
+
+Untuk instalasi _plugin_ CLI IBM Cloud Container Registry dan pembuatan sebuah _namespace_ untuk _image-image_ kamu, lihat [Mulai dengan IBM Cloud Container Registry](https://cloud.ibm.com/docs/Registry?topic=registry-getting-started).
+
+Jika kamu menggunakan akun dan wilayah (_region_) yang sama, kamu dapat melakukan _deploy image-image_ yang disimpan di dalam IBM Cloud Container Registry
+ke dalam _namespace default_ dari klaster IBM Cloud Kubernetes Service yang kamu miliki tanpa konfigurasi tambahan, lihat [Membuat kontainer dari _image_](https://cloud.ibm.com/docs/containers?topic=containers-images). Untuk opsi konfigurasi lainnya, lihat [Bagaimana cara mengotorisasi klaster untuk mengunduh _image_ dari sebuah registri](https://cloud.ibm.com/docs/containers?topic=containers-registry#cluster_registry_auth).
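+
+Sebagai gambaran, berikut sebuah sketsa (dengan asumsi nama Secret `acr-secret`, serta
+variabel-variabel shell dari bagian ACR di atas yang sudah terisi) untuk menyimpan kredensial
+registri semacam ini sebagai sebuah Secret bertipe docker-registry:
+
+```shell
+# membuat Secret bertipe docker-registry dari kredensial registri privat
+kubectl create secret docker-registry acr-secret \
+  --docker-server=$DOCKER_REGISTRY_SERVER \
+  --docker-username=$DOCKER_USER \
+  --docker-password=$DOCKER_PASSWORD \
+  --docker-email=$DOCKER_EMAIL
+```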
+
+### Konfigurasi Node untuk Otentikasi ke sebuah Registri Privat
+
+{{< note >}}
+Jika kamu menjalankan klaster di Google Kubernetes Engine, akan ada `.dockercfg` pada setiap Node dengan kredensial untuk Google Container Registry. Kamu tidak bisa menggunakan cara ini.
+{{< /note >}}
+
+{{< note >}}
+Jika kamu menjalankan klaster di AWS EC2 dan menggunakan EC2 Container Registry (ECR), kubelet pada setiap Node akan dapat
+mengatur dan memperbarui kredensial login ECR. Kamu tidak bisa menggunakan cara ini.
+{{< /note >}}
+
+{{< note >}}
+Cara ini cocok jika kamu dapat mengontrol konfigurasi Node. Cara ini tidak akan bekerja dengan baik pada GCE,
+dan penyedia layanan cloud lainnya yang tidak melakukan penggantian Node secara otomatis.
+{{< /note >}}
+
+{{< note >}}
+Kubernetes pada saat ini hanya mendukung bagian `auths` dan `HttpHeaders` dari konfigurasi docker. Hal ini berarti bantuan kredensial (`credHelpers` atau `credsStore`) tidak didukung.
+{{< /note >}}
+
+
+Docker menyimpan _key_ untuk registri privat pada berkas `$HOME/.dockercfg` atau `$HOME/.docker/config.json`. Jika kamu menempatkan berkas yang sama
+pada daftar jalur pencarian (_search path_) berikut, kubelet menggunakannya sebagai penyedia kredensial saat mengunduh _image_.
+
+* `{--root-dir:-/var/lib/kubelet}/config.json`
+* `{cwd of kubelet}/config.json`
+* `${HOME}/.docker/config.json`
+* `/.docker/config.json`
+* `{--root-dir:-/var/lib/kubelet}/.dockercfg`
+* `{cwd of kubelet}/.dockercfg`
+* `${HOME}/.dockercfg`
+* `/.dockercfg`
+
+{{< note >}}
+Kamu mungkin harus mengatur `HOME=/root` secara eksplisit pada berkas _environment_ kamu untuk kubelet.
+{{< /note >}}
+
+Berikut langkah-langkah yang direkomendasikan untuk mengkonfigurasi Node kamu supaya bisa menggunakan registri privat.
+Pada contoh ini, coba jalankan pada _desktop/laptop_ kamu:
+
+   1. Jalankan `docker login [server]` untuk setiap set kredensial yang ingin kamu gunakan. Ini akan memperbarui `$HOME/.docker/config.json`.
+   1. Lihat `$HOME/.docker/config.json` menggunakan _editor_ untuk memastikan berkas tersebut sudah berisi kredensial yang ingin kamu gunakan.
+   1. Dapatkan daftar Node, contohnya:
+      - jika kamu ingin mendapatkan nama: `nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')`
+      - jika kamu ingin mendapatkan IP: `nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')`
+   1. Salin `.docker/config.json` yang ada di lokal kamu pada salah satu jalur pencarian di atas.
+      - contohnya: `for n in $nodes; do scp ~/.docker/config.json root@$n:/var/lib/kubelet/config.json; done`
+
+Verifikasi dengan membuat sebuah Pod yang menggunakan _image_ privat, contohnya:
+
+```shell
+kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: private-image-test-1
+spec:
+  containers:
+    - name: uses-private-image
+      image: $PRIVATE_IMAGE_NAME
+      imagePullPolicy: Always
+      command: [ "echo", "SUCCESS" ]
+EOF
+```
+
+### Pra-unduh _Image_
+
+{{< note >}}
+Jika kamu menjalankan klaster di Google Kubernetes Engine, maka akan ada `.dockercfg` pada setiap Node dengan kredensial untuk Google Container Registry. Kamu dapat menggunakan cara ini.
+{{< /note >}}
+
+{{< note >}}
+Cara ini cocok jika kamu dapat mengontrol konfigurasi Node. Cara ini tidak akan
+bisa berjalan dengan baik pada GCE, dan penyedia cloud lainnya yang tidak menggantikan
+Node secara otomatis.
+{{< /note >}}
+
+Secara _default_, kubelet akan mencoba untuk mengunduh setiap _image_ dari registri yang dispesifikasikan.
+Hanya saja, jika properti `imagePullPolicy` diatur menjadi `IfNotPresent` atau `Never`, maka
+sebuah _image_ lokal digunakan.
+
+Jika kamu ingin memanfaatkan _image_ pra-unduh sebagai pengganti untuk otentikasi registri,
+kamu harus memastikan semua Node di dalam klaster memiliki _image_ pra-unduh yang sama.
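+
+Sebagai gambaran, berikut sebuah sketsa (dengan asumsi sebuah _image_ hipotetis
+`registry.example/app:v1` yang sudah dipra-unduh pada setiap Node) yang memaksa kubelet
+untuk hanya menggunakan _image_ lokal:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: prepulled-pod
+spec:
+  containers:
+  - name: app
+    # image ini harus sudah ada pada Node; kubelet tidak akan melakukan pull
+    image: registry.example/app:v1
+    imagePullPolicy: Never
+```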
+
+Cara ini bisa digunakan untuk memuat _image_ tertentu demi kecepatan, atau sebagai alternatif dari otentikasi pada sebuah registri privat.
+
+Semua Pod akan mendapatkan akses baca ke _image_ pra-unduh manapun.
+
+### Mengatur ImagePullSecrets pada sebuah Pod
+
+{{< note >}}
+Cara ini merupakan cara yang direkomendasikan saat ini untuk Google Kubernetes Engine, GCE, dan penyedia cloud lainnya yang
+secara otomatis dapat membuat Node.
+{{< /note >}}
+
+Kubernetes mendukung penentuan _key_ registri pada sebuah Pod.
+
+#### Membuat sebuah Secret dengan Docker Config
+
+Jalankan perintah berikut, dengan mengganti nilai yang ditulis dalam huruf kapital dengan nilai yang sesuai:
+
+```shell
+kubectl create secret docker-registry <name> --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
+```
+
+Jika kamu sudah memiliki berkas kredensial Docker, daripada menggunakan perintah di atas,
+kamu dapat mengimpor berkas kredensial tersebut sebagai Kubernetes Secret.
+[Membuat sebuah Secret berbasiskan pada kredensial Docker yang sudah ada](/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials) menjelaskan bagaimana mengatur ini.
+Cara ini berguna khususnya jika kamu menggunakan beberapa registri kontainer privat, karena
+perintah `kubectl create secret docker-registry` akan membuat sebuah Secret yang
+hanya bekerja menggunakan satu registri privat.
+
+{{< note >}}
+Pod-Pod hanya dapat mengacu pada imagePullSecrets di dalam _namespace_ miliknya sendiri,
+sehingga proses ini perlu dilakukan satu kali untuk setiap _namespace_.
+{{< /note >}}
+
+#### Mengacu pada imagePullSecrets di dalam sebuah Pod
+
+Sekarang, kamu dapat membuat Pod yang mengacu pada Secret tadi dengan menambahkan bagian `imagePullSecrets`
+pada sebuah definisi Pod.
+
+```shell
+cat <<EOF > pod.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: foo
+  namespace: awesomeapps
+spec:
+  containers:
+    - name: foo
+      image: janedoe/awesomeapp:v1
+  imagePullSecrets:
+    - name: myregistrykey
+EOF
+
+cat <<EOF >> ./kustomization.yaml
+resources:
+- pod.yaml
+EOF
+```
+
+Cara ini perlu dilakukan untuk setiap Pod yang menggunakan registri privat.
+
+Hanya saja, pengaturan _field_ ini dapat diotomasi dengan mengatur imagePullSecrets di dalam
+sumber daya [serviceAccount](/docs/user-guide/service-accounts).
+Periksa [Tambahkan ImagePullSecrets pada sebuah Service Account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) untuk instruksi yang lebih detail.
+
+Kamu dapat menggunakan cara ini bersama `.docker/config.json` pada setiap Node. Kredensial-kredensial
+yang ada akan di-_merge_. Cara ini akan dapat bekerja pada Google Kubernetes Engine.
+
+### Kasus-Kasus Penggunaan (_Use Case_)
+
+Ada beberapa solusi untuk konfigurasi registri privat. Berikut beberapa kasus penggunaan
+dan solusi yang disarankan.
+
+1. Klaster yang hanya menjalankan _image non-proprietary_ (misalnya open-source). Tidak perlu untuk menyembunyikan _image_.
+   - Gunakan _image_ publik pada Docker hub.
+     - Tidak ada konfigurasi yang diperlukan.
+     - Pada GCE/Google Kubernetes Engine, sebuah _mirror_ lokal digunakan secara otomatis untuk meningkatkan kecepatan dan ketersediaan.
+1. Klaster yang menjalankan _image proprietary_ yang seharusnya disembunyikan dari luar perusahaan, tetapi bisa terlihat oleh pengguna klaster.
+   - Gunakan sebuah [registri Docker](https://docs.docker.com/registry/) privat yang _hosted_.
+     - Bisa saja di-_host_ pada [Docker Hub](https://hub.docker.com/signup), atau lainnya.
+     - Konfigurasi `.docker/config.json` secara manual pada setiap Node seperti dijelaskan di atas.
+   - Atau, jalankan sebuah registri privat internal di belakang _firewall_ kamu dengan akses baca terbuka.
+     - Tidak ada konfigurasi Kubernetes yang diperlukan.
+   - Atau, ketika pada GCE/Google Kubernetes Engine, gunakan Google Container Registry yang ada di proyek.
+     - Cara ini bisa bekerja lebih baik dengan _autoscaling_ klaster dibandingkan konfigurasi Node manual.
+   - Atau, pada sebuah klaster dimana mengubah konfigurasi Node tidak nyaman untuk dilakukan, gunakan `imagePullSecrets`.
+1. Klaster dengan _image proprietary_, beberapa di antaranya memerlukan kontrol akses yang lebih ketat.
+   - Pastikan [AlwaysPullImages _admission controller_](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) aktif. Jika tidak, semua Pod berpotensi memiliki akses ke semua _image_.
+   - Pindahkan data sensitif pada sumber daya "Secret", daripada mengemasnya menjadi sebuah _image_.
+1. Sebuah klaster _multi-tenant_ dimana setiap _tenant_ memerlukan registri privatnya masing-masing.
+   - Pastikan [AlwaysPullImages _admission controller_](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) aktif. Jika tidak, semua Pod dari semua tenant berpotensi memiliki akses pada semua _image_.
+   - Jalankan sebuah registri privat dimana otorisasi diperlukan.
+   - Men-_generate_ kredensial registri untuk setiap _tenant_, masukkan ke dalam _secret_ untuk setiap _namespace tenant_.
+   - _Tenant_ menambahkan _secret_ tadi pada imagePullSecrets untuk setiap _namespace_.
+
+
+Jika kamu memiliki akses pada beberapa registri, kamu dapat membuat satu _secret_ untuk setiap registri.
+Kubelet akan melakukan _merge_ `imagePullSecrets` manapun menjadi sebuah `.docker/config.json` virtual.
+
+{{% /capture %}}
diff --git a/content/id/docs/concepts/storage/persistent-volumes.md b/content/id/docs/concepts/storage/persistent-volumes.md
index a6cf661653486..4063f1a282f23 100644
--- a/content/id/docs/concepts/storage/persistent-volumes.md
+++ b/content/id/docs/concepts/storage/persistent-volumes.md
@@ -296,6 +296,10 @@ spec:
     server: 172.17.0.2
 ```
 
+{{< note >}}
+Program pembantu yang berkaitan dengan tipe volume bisa saja diperlukan untuk mengonsumsi sebuah PersistentVolume di dalam klaster. Contoh ini menggunakan PersistentVolume dengan tipe NFS dan program pembantu /sbin/mount.nfs diperlukan untuk mendukung proses mounting sistem berkas (filesystem) NFS.
+{{< /note >}}
+
 ### Kapasitas
 
 Secara umum, sebuah PV akan memiliki kapasitas _storage_ tertentu. Hal ini ditentukan menggunakan atribut `capacity` pada PV. Lihat [Model Sumber Daya](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) Kubernetes untuk memahami satuan yang diharapkan pada atribut `capacity`.
diff --git a/content/id/docs/reference/glossary/container-env-variables.md b/content/id/docs/reference/glossary/container-env-variables.md
new file mode 100644
index 0000000000000..996475cc2fb6d
--- /dev/null
+++ b/content/id/docs/reference/glossary/container-env-variables.md
@@ -0,0 +1,17 @@
+---
+title: Container Environment Variables
+id: container-env-variables
+date: 2019-06-24
+full_link: /docs/concepts/containers/container-environment-variables/
+short_description: >
+  Variabel environment kontainer merupakan pasangan name=value yang dapat digunakan untuk menyediakan informasi penting bagi kontainer yang dijalankan pada pod.
+
+aka:
+tags:
+- fundamental
+---
+ Variabel _environment_ kontainer merupakan pasangan name=value yang menyediakan informasi penting bagi kontainer yang dijalankan pada sebuah Pod.
+
+
+
+Variabel environment kontainer merupakan pasangan name=value yang dapat digunakan untuk menyediakan informasi penting bagi {{< glossary_tooltip text="kontainer" term_id="container" >}} yang dijalankan pada pod. Contohnya, detail mengenai file systems, informasi mengenai kontainer itu sendiri, dan komponen kluster lainnya seperti endpoint kluster.
diff --git a/content/id/docs/reference/glossary/container.md b/content/id/docs/reference/glossary/container.md
new file mode 100644
index 0000000000000..df94eca12642a
--- /dev/null
+++ b/content/id/docs/reference/glossary/container.md
@@ -0,0 +1,17 @@
+---
+title: Container
+id: container
+date: 2019-06-24
+full_link: /docs/concepts/overview/what-is-kubernetes/#why-containers
+short_description: >
+  Sebuah image yang ringan dan *executable* yang mengandung perangkat lunak dan segala *dependency* yang dibutuhkan.
+
+aka:
+tags:
+- fundamental
+- workload
+---
+Sebuah image yang ringan dan *executable* yang mengandung perangkat lunak dan segala *dependency* yang dibutuhkan.
+
+
+Kontainer memisahkan aplikasi dari infrastruktur yang mendasarinya agar proses *deployment* menjadi lebih mudah pada berbagai *environment* dari *cloud provider* yang ada.
\ No newline at end of file
diff --git a/content/id/docs/reference/glossary/image.md b/content/id/docs/reference/glossary/image.md
new file mode 100644
index 0000000000000..bc26114e60bf4
--- /dev/null
+++ b/content/id/docs/reference/glossary/image.md
@@ -0,0 +1,18 @@
+---
+title: Image
+id: image
+date: 2019-04-24
+full_link:
+short_description: >
+  Instans yang disimpan dari sebuah kontainer yang mengandung seperangkat perangkat lunak yang dibutuhkan untuk menjalankan sebuah aplikasi.
+
+aka:
+tags:
+- fundamental
+---
+ Instans yang disimpan dari sebuah kontainer yang mengandung seperangkat perangkat lunak yang dibutuhkan untuk menjalankan sebuah aplikasi.
+
+
+
+Sebuah mekanisme untuk mengemas perangkat lunak yang memungkinkan perangkat lunak tersebut disimpan di dalam registri kontainer, di-_pull_ ke dalam filesystem lokal, dan dijalankan sebagai suatu aplikasi. Metadata yang dimasukkan mengindikasikan _executable_ apa saja yang perlu dijalankan, siapa yang membuat _executable_ tersebut, dan informasi lainnya.
+
diff --git a/content/id/docs/reference/glossary/pod.md b/content/id/docs/reference/glossary/pod.md
new file mode 100644
index 0000000000000..e98315217b1f7
--- /dev/null
+++ b/content/id/docs/reference/glossary/pod.md
@@ -0,0 +1,17 @@
+---
+title: Pod
+id: pod
+date: 2019-06-24
+full_link: /docs/concepts/workloads/pods/pod-overview/
+short_description: >
+  Unit Kubernetes yang paling sederhana dan kecil. Sebuah Pod merepresentasikan sebuah set kontainer yang dijalankan pada kluster kamu.
+
+aka:
+tags:
+- core-object
+- fundamental
+---
+Unit Kubernetes yang paling sederhana dan kecil. Sebuah Pod merepresentasikan sebuah set {{< glossary_tooltip text="kontainer" term_id="container" >}} yang dijalankan pada kluster kamu.
+
+
+Sebuah Pod biasanya digunakan untuk menjalankan sebuah kontainer. Pod juga dapat digunakan untuk menjalankan beberapa _sidecar container_ serta beberapa fitur tambahan. Pod biasanya diatur oleh sebuah {{< glossary_tooltip term_id="deployment" >}}.
diff --git a/content/id/docs/reference/glossary/volume.md b/content/id/docs/reference/glossary/volume.md
new file mode 100644
index 0000000000000..c247860d2c025
--- /dev/null
+++ b/content/id/docs/reference/glossary/volume.md
@@ -0,0 +1,17 @@
+---
+title: Volume
+id: volume
+date: 2019-04-24
+full_link: /docs/concepts/storage/volumes/
+short_description: >
+  Sebuah direktori yang mengandung data, dapat diakses oleh kontainer-kontainer di dalam pod.
+
+aka:
+tags:
+- core-object
+- fundamental
+---
+Sebuah direktori yang mengandung data, dapat diakses oleh kontainer-kontainer di dalam {{< glossary_tooltip text="pod" term_id="pod" >}}.
+
+
+Sebuah volume pada Kubernetes akan dianggap hidup selama {{< glossary_tooltip text="pod" term_id="pod" >}} yang membungkusnya masih dalam kondisi hidup. Dengan demikian, sebuah volume hidup lebih lama dari {{< glossary_tooltip text="container" term_id="container" >}} manapun yang dijalankan di dalam {{< glossary_tooltip text="pod" term_id="pod" >}} tersebut, dan data pada volume tadi tetap terjaga meskipun {{< glossary_tooltip text="container" term_id="container" >}} di-_restart_.
\ No newline at end of file
diff --git a/content/ja/docs/concepts/_index.md b/content/ja/docs/concepts/_index.md
index 51442fd391b81..a179d79113231 100644
--- a/content/ja/docs/concepts/_index.md
+++ b/content/ja/docs/concepts/_index.md
@@ -31,7 +31,7 @@ Kubernetesには、デプロイ済みのコンテナ化されたアプリケー
 基本的なKubernetesのオブジェクトは次のとおりです。
 
 * [Pod](/ja/docs/concepts/workloads/pods/pod-overview/)
-* [Service](/docs/concepts/services-networking/service/)
+* [Service](/ja/docs/concepts/services-networking/service/)
 * [Volume](/docs/concepts/storage/volumes/)
 * [Namespace](/ja/docs/concepts/overview/working-with-objects/namespaces/)
 
diff --git a/content/ja/docs/concepts/architecture/nodes.md b/content/ja/docs/concepts/architecture/nodes.md
index a35674840f49b..eac8388f412f9 100644
--- a/content/ja/docs/concepts/architecture/nodes.md
+++ b/content/ja/docs/concepts/architecture/nodes.md
@@ -6,7 +6,7 @@ weight: 10
 
 {{% capture overview %}}
 
-ノードは、以前には `ミニオン` としても知られていた、Kubernetesにおけるワーカーマシンです。1つのノードはクラスターの性質にもよりますが、1つのVMまたは物理的なマシンです。各ノードには[Pod](/docs/concepts/workloads/pods/pod/)を動かすために必要なサービスが含まれており、マスターコンポーネントによって管理されています。ノード上のサービスには[コンテナランタイム](/docs/concepts/overview/components/#node-components)、kubelet、kube-proxyが含まれています。詳細については、設計ドキュメントの[Kubernetes Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)セクションをご覧ください。
+ノードは、以前には `ミニオン` としても知られていた、Kubernetesにおけるワーカーマシンです。1つのノードはクラスターの性質にもよりますが、1つのVMまたは物理的なマシンです。各ノードには[Pod](/ja/docs/concepts/workloads/pods/pod/)を動かすために必要なサービスが含まれており、マスターコンポーネントによって管理されています。ノード上のサービスには[コンテナランタイム](/ja/docs/concepts/overview/components/#container-runtime)、kubelet、kube-proxyが含まれています。詳細については、設計ドキュメントの[Kubernetes Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)セクションをご覧ください。
 
 {{% /capture %}}
 
@@ -67,7 +67,7 @@ kubectl describe node <ノード名>
 
 Ready conditionが`pod-eviction-timeout`に設定された時間を超えても`Unknown`や`False`のままになっている場合、[kube-controller-manager](/docs/admin/kube-controller-manager/)に引数が渡され、該当ノード上にあるPodはノードコントローラーによって削除がスケジュールされます。デフォルトの退役のタイムアウトの時間は**5分**です。ノードが到達不能ないくつかの場合においては、APIサーバーが該当ノードのkubeletと疎通できない状態になっています。その場合、APIサーバーがkubeletと再び通信を確立するまでの間、Podの削除を行うことはできません。削除がスケジュールされるまでの間、削除対象のPodたちは切り離されたノードの上で稼働を続けることになります。
-バージョン1.5よりも前のKubernetesでは、ノードコントローラーはAPIサーバーから到達不能なそれらのPodを[強制削除](/docs/concepts/workloads/pods/pod/#force-deletion-of-pods)していました。しかしながら、1.5以降では、ノードコントローラーはクラスター内でPodが停止するのを確認するまでは強制的に削除しないようになりました。到達不能なノード上で動いているPodは`Terminating`または`Unknown`のステータスになります。Kubernetesが基盤となるインフラストラクチャーを推定できない場合、クラスター管理者は手動でNodeオブジェクトを削除する必要があります。KubernetesからNodeオブジェクトを削除すると、そのノードで実行されているすべてのPodオブジェクトがAPIサーバーから削除され、それらの名前が解放されます。
+バージョン1.5よりも前のKubernetesでは、ノードコントローラーはAPIサーバーから到達不能なそれらのPodを[強制削除](/ja/docs/concepts/workloads/pods/pod/#podの強制削除)していました。しかしながら、1.5以降では、ノードコントローラーはクラスター内でPodが停止するのを確認するまでは強制的に削除しないようになりました。到達不能なノード上で動いているPodは`Terminating`または`Unknown`のステータスになります。Kubernetesが基盤となるインフラストラクチャーを推定できない場合、クラスター管理者は手動でNodeオブジェクトを削除する必要があります。KubernetesからNodeオブジェクトを削除すると、そのノードで実行されているすべてのPodオブジェクトがAPIサーバーから削除され、それらの名前が解放されます。
 
 バージョン1.12において、`TaintNodesByCondition`機能がBetaに昇格し、それによってノードのライフサイクルコントローラーがconditionを表した[taint](/docs/concepts/configuration/taint-and-toleration/)を自動的に生成するようになりました。
 同様に、スケジューラーがPodを配置するノードを検討する際、ノードのtaintとPodのtolerationsを見るかわりにconditionを無視するようになりました。
 
@@ -98,7 +98,7 @@ CapacityとAllocatableについて深く知りたい場合は、ノード上で
 
 ## 管理 {#management}
 
-[Pod](/docs/concepts/workloads/pods/pod/)や[Service](/docs/concepts/services-networking/service/)と違い、ノードは本質的にはKubernetesによって作成されません。GCPのようなクラウドプロバイダーによって外的に作成されるか、VMや物理マシンのプールに存在するものです。そのため、Kubernetesがノードを作成すると、そのノードを表すオブジェクトが作成されます。作成後、Kubernetesはそのノードが有効かどうかを確認します。 たとえば、次の内容からノードを作成しようとしたとします:
+[Pod](/ja/docs/concepts/workloads/pods/pod/)や[Service](/ja/docs/concepts/services-networking/service/)と違い、ノードは本質的にはKubernetesによって作成されません。GCPのようなクラウドプロバイダーによって外的に作成されるか、VMや物理マシンのプールに存在するものです。そのため、Kubernetesがノードを作成すると、そのノードを表すオブジェクトが作成されます。作成後、Kubernetesはそのノードが有効かどうかを確認します。 たとえば、次の内容からノードを作成しようとしたとします:
 
 ```json
 {
@@ -209,7 +209,7 @@ DaemonSetコントローラーによって作成されたPodはKubernetesスケ
 
 Kubernetesスケジューラーは、ノード上のすべてのPodに十分なリソースがあることを確認します。
 ノード上のコンテナが要求するリソースの合計がノードキャパシティ以下であることを確認します。
-これは、kubeletによって開始されたすべてのコンテナを含みますが、[コンテナランタイム](/docs/concepts/overview/components/#node-components)によって直接開始されたコンテナやコンテナの外で実行されているプロセスは含みません。
+これは、kubeletによって開始されたすべてのコンテナを含みますが、[コンテナランタイム](/ja/docs/concepts/overview/components/#container-runtime)によって直接開始されたコンテナやコンテナの外で実行されているプロセスは含みません。
 
 Pod以外のプロセス用にリソースを明示的に予約したい場合は、このチュートリアルに従って[Systemデーモン用にリソースを予約](/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved)してください。
 
diff --git a/content/ja/docs/concepts/cluster-administration/cluster-administration-overview.md b/content/ja/docs/concepts/cluster-administration/cluster-administration-overview.md
new file mode 100644
index 0000000000000..935edba7a3614
--- /dev/null
+++ b/content/ja/docs/concepts/cluster-administration/cluster-administration-overview.md
@@ -0,0 +1,69 @@
+---
+reviewers:
+title: クラスター管理の概要
+content_template: templates/concept
+weight: 10
+---
+
+{{% capture overview %}}
+このページはKubernetesクラスターを作成または管理する方向けの内容です。Kubernetesのコア[コンセプト](/ja/docs/concepts/)についてある程度精通していることを前提とします。
+{{% /capture %}}
+
+{{% capture body %}}
+## クラスターのプランニング
+
+Kubernetesクラスターの計画、セットアップ、設定の例を知るには[設定](/ja/docs/setup/)のガイドを参照してください。この記事で列挙されているソリューションは*ディストリビューション* と呼ばれます。
+
+ガイドを選択する前に、いくつかの考慮事項を挙げます。
+
+ - ユーザーのコンピューター上でKubernetesを試したいでしょうか、それとも高可用性のあるマルチノードクラスターを構築したいでしょうか? あなたのニーズにあったディストリビューションを選択してください。
+ - **もしあなたが高可用性を求める場合**、 [複数ゾーンにまたがるクラスター](/docs/concepts/cluster-administration/federation/)の設定について学んでください。
+ - [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/)のような**ホストされているKubernetesクラスター**を使用するのか、それとも**自分自身でクラスターをホストするのでしょうか**?
+ - 使用するクラスターは**オンプレミス**なのか、それとも**クラウド (IaaS)**でしょうか?
Kubernetesはハイブリッドクラスターを直接サポートしていません。その代わりユーザーは複数のクラスターをセットアップできます。
+ - Kubernetesを**"ベアメタル"なハードウェア** 上で稼働させますか? それとも**仮想マシン (VMs)** 上で稼働させますか?
+ - **もしオンプレミスでKubernetesを構築する場合**、どの[ネットワークモデル](/ja/docs/concepts/cluster-administration/networking/)が最適か検討してください。
+ - **ただクラスターを稼働させたいだけ**でしょうか、それとも**Kubernetesプロジェクトのコードの開発**を行いたいでしょうか? もし後者の場合、開発が進行中のディストリビューションを選択してください。いくつかのディストリビューションはバイナリリリースのみ使用していますが、多くの選択肢があります。
+ - クラスターを稼働させるのに必要な[コンポーネント](/ja/docs/concepts/overview/components/)についてよく理解してください。
+
+注意: 全てのディストリビューションがアクティブにメンテナンスされている訳ではありません。最新バージョンのKubernetesでテストされたディストリビューションを選択してください。
+
+## クラスターの管理
+
+* [クラスターの管理](/docs/tasks/administer-cluster/cluster-management/)では、クラスターのライフサイクルに関するいくつかのトピックを紹介しています。例えば、新規クラスターの作成、クラスターのマスターやワーカーノードのアップグレード、ノードのメンテナンスの実施(例: カーネルのアップグレード)、稼働中のクラスターのKubernetes APIバージョンのアップグレードについてです。
+
+* [ノードの管理](/ja/docs/concepts/architecture/nodes/)方法について学んでください。
+
+* 共有クラスターにおける[リソースクォータ](/docs/concepts/policy/resource-quotas/)のセットアップと管理方法について学んでください。
+
+## クラスターをセキュアにする
+
+* [Certificates](/docs/concepts/cluster-administration/certificates/)では、異なるツールチェインを使用して証明書を作成する方法を説明します。
+
+* [Kubernetes コンテナの環境](/ja/docs/concepts/containers/container-environment-variables/)では、Kubernetesノード上でのKubeletが管理するコンテナの環境について説明します。
+
+* [Kubernetes APIへのアクセス制御](/docs/reference/access-authn-authz/controlling-access/)では、ユーザーとサービスアカウントの権限の設定方法について説明します。
+
+* [認証](/docs/reference/access-authn-authz/authentication/)では、様々な認証オプションを含むKubernetesでの認証について説明します。
+
+* [認可](/docs/reference/access-authn-authz/authorization/)では、認証とは別に、HTTPリクエストの処理方法を制御します。
+
+* [アドミッションコントローラーの使用](/docs/reference/access-authn-authz/admission-controllers/)では、認証と認可の後にKubernetes APIに対するリクエストをインターセプトするプラグインについて説明します。
+
+* [Kubernetesクラスターでのsysctlの使用](/docs/concepts/cluster-administration/sysctl-cluster/)では、管理者向けにカーネルパラメーターを設定するため`sysctl`コマンドラインツールの使用方法について解説します。
+
+* [クラスターの監査](/docs/tasks/debug-application-cluster/audit/)では、Kubernetesの監査ログの扱い方について解説します。
+
+### kubeletをセキュアにする
+ * [マスターとノードのコミュニケーション](/ja/docs/concepts/architecture/master-node-communication/)
+ * [TLSのブートストラップ](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
+ * [Kubeletの認証/認可](/docs/admin/kubelet-authentication-authorization/)
+
+## オプションのクラスターサービス
+
+* [DNSのインテグレーション](/ja/docs/concepts/services-networking/dns-pod-service/)では、DNS名をKubernetes Serviceに直接名前解決する方法を解説します。
+
+* [クラスターアクティビティのロギングと監視](/docs/concepts/cluster-administration/logging/)では、Kubernetesにおけるロギングがどのように行われ、どう実装されているかについて解説します。
+
+{{% /capture %}}
+
+
diff --git a/content/ja/docs/concepts/cluster-administration/networking.md b/content/ja/docs/concepts/cluster-administration/networking.md
new file mode 100644
index 0000000000000..80cf72f0eedc6
--- /dev/null
+++ b/content/ja/docs/concepts/cluster-administration/networking.md
@@ -0,0 +1,291 @@
+---
+title: クラスターのネットワーク
+content_template: templates/concept
+weight: 50
+---
+
+{{% capture overview %}}
+
+ネットワークはKubernetesにおける中心的な部分ですが、どのように動作するかを正確に理解することは難解な場合もあります。
+Kubernetesには、対応すべき4つの異なるネットワークの問題があります:
+
+1. 高度に結合されたコンテナ間の通信: これは、[Pod](/ja/docs/concepts/workloads/pods/pod/)および`localhost`通信によって解決されます。
+2. Pod間の通信: 本ドキュメントの主な焦点です。
+3. Podからサービスへの通信:これは[Service](/ja/docs/concepts/services-networking/service/)でカバーされています。
+4.
外部からサービスへの通信:これは[Service](/ja/docs/concepts/services-networking/service/)でカバーされています。
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+Kubernetesは、言ってしまえばアプリケーション間でマシンを共有するためのものです。通常、マシンを共有するには、2つのアプリケーションが同じポートを使用しないようにする必要があります。
+複数の開発者間でポートを調整することを大規模に行うのは非常に難しく、ユーザーが制御できないクラスターレベルの問題にさらされることになります。
+
+動的ポート割り当てはシステムに多くの複雑さをもたらします。すべてのアプリケーションはパラメータとしてポートを管理する必要があり、APIサーバーにて動的なポート番号を設定値として注入する方法が必要となり、各サービスはお互いにお互いを見つける方法が必要です。Kubernetesはこれに対処するのではなく、別のアプローチを取ります。
+
+## Kubernetesのネットワークモデル
+
+すべての`Pod`は独自のIPアドレスを持ちます。これは、`Pod`間のリンクを明示的に作成する必要がなく、コンテナポートをホストポートにマッピングする必要がほとんどないことを意味します。こうすることで、ポート割り当て、名前解決、サービスディスカバリー、負荷分散、アプリケーション設定、および移行の観点から、`Pod`をVMまたは物理ホストと同様に扱うことができる、クリーンで後方互換性のあるモデルを生み出しています。
+
+Kubernetesは、ネットワークの実装に次の基本的な要件を課しています(意図的なネットワークセグメンテーションポリシーを除きます):
+
+   * ノード上のPodが、NATなしですべてのノード上のすべてのPodと通信できること
+   * systemdやkubeletなどノード上にあるエージェントが、そのノード上のすべてのPodと通信できること
+
+注: ホストネットワークで実行される`Pod`をサポートするプラットフォームの場合(Linuxなど):
+
+   * ノードのホストネットワーク内のPodは、NATなしですべてのノード上のすべてのPodと通信できます
+
+このモデルは全体としてそれほど複雑ではないことに加え、KubernetesがVMからコンテナへのアプリへの移植を簡単にするという要望と基本的に互換性があります。ジョブがVMで実行されていた頃も、VMにはIPがあってプロジェクト内の他のVMと通信できました。これは同じ基本モデルです。
+
+KubernetesのIPアドレスは`Pod`スコープに存在します。`Pod`内のコンテナは、IPアドレスを含むネットワーク名前空間を共有します。これは、`Pod`内のコンテナがすべて`localhost`上の互いのポートに到達できることを意味します。また、`Pod`内のコンテナがポートの使用を調整する必要があることも意味しますが、これもVM内のプロセスと同じです。これのことを「IP-per-pod(Pod毎のIP)」モデルと呼びます。
+
+この実装方法は実際に使われているコンテナランタイムの詳細部分です。
+
+`Pod`に転送する`ノード`自体のポート(ホストポートと呼ばれる)を要求することは可能ですが、これは非常にニッチな操作です。このポート転送の実装方法も、コンテナランタイムの詳細部分です。`Pod`自体は、ホストポートの有無を認識しません。
+
+## Kubernetesネットワークモデルの実装方法
+
+このネットワークモデルを実装する方法はいくつかあります。このドキュメントは、こうした方法を網羅的にはカバーしませんが、いくつかの技術の紹介として、また出発点として役立つことを願っています。
+
+この一覧はアルファベット順にソートされており、順序は優先ステータスを意味するものではありません。
+
+### ACI
+
+[Cisco Application Centric Infrastructure](https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html) offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare metal servers.
+[ACI](https://www.github.com/noironetworks/aci-containers) provides container networking integration for ACI.
+An overview of the integration is provided [here](https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-739493.pdf).
+
+### Antrea
+
+Project [Antrea](https://github.com/vmware-tanzu/antrea) is an opensource Kubernetes networking solution intended to be Kubernetes native.
+It leverages Open vSwitch as the networking data plane.
+Open vSwitch is a high-performance programmable virtual switch that supports both Linux and Windows.
+Open vSwitch enables Antrea to implement Kubernetes Network Policies in a high-performance and efficient manner.
+Thanks to the "programmable" characteristic of Open vSwitch, Antrea is able to implement an extensive set of networking and security features and services on top of Open vSwitch.
+
+### AOS from Apstra
+
+[AOS](http://www.apstra.com/products/aos/) is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform. AOS leverages a highly scalable distributed design to eliminate network outages while minimizing costs.
+
+The AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems. These Layer-3 hosts can be Linux servers (Debian, Ubuntu, CentOS) that create BGP neighbor relationships directly with the top of rack switches (TORs).
AOS automates the routing adjacencies and then provides fine grained control over the route health injections (RHI) that are common in a Kubernetes deployment.
+
+AOS has a rich set of REST API endpoints that enable Kubernetes to quickly change the network policy based on application requirements. Further enhancements will integrate the AOS Graph model used for the network design with the workload provisioning, enabling an end to end management system for both private and public clouds.
+
+AOS supports the use of common vendor equipment from manufacturers including Cisco, Arista, Dell, Mellanox, HPE, and a large number of white-box systems and open network operating systems like Microsoft SONiC, Dell OPX, and Cumulus Linux.
+
+Details on how the AOS system works can be accessed here: http://www.apstra.com/products/how-it-works/
+
+### AWS VPC CNI for Kubernetes
+
+[AWS VPC CNI](https://github.com/aws/amazon-vpc-cni-k8s)は、Kubernetesクラスター向けの統合されたAWS Virtual Private Cloud(VPC)ネットワーキングを提供します。このCNIプラグインは、高いスループットと可用性、低遅延、および最小のネットワークジッタを提供します。さらに、ユーザーは、Kubernetesクラスターを構築するための既存のAWS VPCネットワーキングとセキュリティのベストプラクティスを適用できます。これには、ネットワークトラフィックの分離にVPCフローログ、VPCルーティングポリシー、およびセキュリティグループを使用する機能が含まれます。
+
+このCNIプラグインを使用すると、Kubernetes PodはVPCネットワーク上と同じIPアドレスをPod内に持つことができます。CNIはAWS Elastic Networking Interfaces(ENI)を各Kubernetesノードに割り当て、ノード上のPodに各ENIのセカンダリIP範囲を使用します。このCNIには、Podの起動時間を短縮するためのENIとIPアドレスの事前割り当ての制御が含まれており、最大2,000ノードの大規模クラスターが可能です。
+
+さらに、このCNIは[ネットワークポリシーの適用のためにCalico](https://docs.aws.amazon.com/ja_jp/eks/latest/userguide/calico.html)と一緒に実行できます。AWS VPC CNIプロジェクトは、[GitHubのドキュメント](https://github.com/aws/amazon-vpc-cni-k8s)とともにオープンソースで公開されています。
+
+### Azure CNI for Kubernetes
+[Azure CNI](https://docs.microsoft.com/en-us/azure/virtual-network/container-networking-overview) is an [open source](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md) plugin that integrates Kubernetes Pods with an Azure Virtual Network (also known as VNet) providing network performance at par with VMs. Pods can connect to peered VNet and to on-premises over Express Route or site-to-site VPN and are also directly reachable from these networks. Pods can access Azure services, such as storage and SQL, that are protected by Service Endpoints or Private Link. You can use VNet security policies and routing to filter Pod traffic. The plugin assigns VNet IPs to Pods by utilizing a pool of secondary IPs pre-configured on the Network Interface of a Kubernetes node.
+
+Azure CNI is available natively in the [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni).
+
+
+### Big Cloud Fabric from Big Switch Networks
+
+[Big Cloud Fabric](https://www.bigswitch.com/container-network-automation) is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments. Using unified physical & virtual SDN, Big Cloud Fabric tackles inherent container networking problems such as load balancing, visibility, troubleshooting, security policies & container traffic monitoring.
+
+With the help of the Big Cloud Fabric's virtual pod multi-tenant architecture, container orchestration systems such as Kubernetes, RedHat OpenShift, Mesosphere DC/OS & Docker Swarm will be natively integrated alongside with VM orchestration systems such as VMware, OpenStack & Nutanix. Customers will be able to securely inter-connect any number of these clusters and enable inter-tenant communication between them if needed.
+ +BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](http://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/). + +### Cilium + +[Cilium](https://github.com/cilium/cilium) is open source software for +providing and transparently securing network connectivity between application +containers. Cilium is L7/HTTP aware and can enforce network policies on L3-L7 +using an identity based security model that is decoupled from network +addressing, and it can be used in combination with other CNI plugins. + +### CNI-Genie from Huawei + +[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](https://github.com/kubernetes/website/blob/master/content/en/docs/concepts/cluster-administration/networking.md#the-kubernetes-network-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](http://docs.projectcalico.org/), [Romana](http://romana.io), [Weave-net](https://www.weave.works/products/weave-net/). + +CNI-Genie also supports [assigning multiple IP addresses to a pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod), each from a different CNI plugin. + +### cni-ipvlan-vpc-k8s +[cni-ipvlan-vpc-k8s](https://github.com/lyft/cni-ipvlan-vpc-k8s) contains a set +of CNI and IPAM plugins to provide a simple, host-local, low latency, high +throughput, and compliant networking stack for Kubernetes within Amazon Virtual +Private Cloud (VPC) environments by making use of Amazon Elastic Network +Interfaces (ENI) and binding AWS-managed IPs into Pods using the Linux kernel's +IPvlan driver in L2 mode. + +The plugins are designed to be straightforward to configure and deploy within a +VPC. Kubelets boot and then self-configure and scale their IP usage as needed +without requiring the often recommended complexities of administering overlay +networks, BGP, disabling source/destination checks, or adjusting VPC route +tables to provide per-instance subnets to each host (which is limited to 50-100 +entries per VPC). In short, cni-ipvlan-vpc-k8s significantly reduces the +network complexity required to deploy Kubernetes at scale within AWS. + +### Contiv + +[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases. [Contiv](http://contiv.io) is all open sourced. + +### Contrail / Tungsten Fabric + +[Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is a truly open, multi-cloud network virtualization and policy management platform. 
Contrail and Tungsten Fabric are integrated with various orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide different isolation modes for virtual machines, containers/pods and bare metal workloads. + +### DANM + +[DANM](https://github.com/nokia/danm) is a networking solution for telco workloads running in a Kubernetes cluster. It's built up from the following components: + + * A CNI plugin capable of provisioning IPVLAN interfaces with advanced features + * An in-built IPAM module with the capability of managing multiple, cluster-wide, discontinuous L3 networks and providing a dynamic, static, or no IP allocation scheme on-demand + * A CNI metaplugin capable of attaching multiple network interfaces to a container, either through its own CNI, or through delegating the job to any of the popular CNI solutions like SR-IOV or Flannel, in parallel + * A Kubernetes controller capable of centrally managing both VxLAN and VLAN interfaces of all Kubernetes hosts + * Another Kubernetes controller extending Kubernetes' Service-based service discovery concept to work over all network interfaces of a Pod + +With this toolset, DANM is able to provide multiple separated network interfaces, the possibility to use different networking back ends, and advanced IPAM features for the pods. + +### Flannel + +[Flannel](https://github.com/coreos/flannel#flannel) is a very simple overlay +network that satisfies the Kubernetes requirements. Many +people have reported success with Flannel and Kubernetes. + +### Google Compute Engine (GCE) + +For the Google Compute Engine cluster configuration scripts, [advanced +routing](https://cloud.google.com/vpc/docs/routes) is used to +assign each VM a subnet (default is `/24` - 254 IPs). Any traffic bound for that +subnet will be routed directly to the VM by the GCE network fabric. This is in +addition to the "main" IP address assigned to the VM, which is NAT'ed for +outbound internet access. A Linux bridge (called `cbr0`) is configured to exist +on that subnet, and is passed to Docker's `--bridge` flag. + +Docker is started with: + +```shell +DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false" +``` + +This bridge is created by Kubelet (controlled by the `--network-plugin=kubenet` +flag) according to the `Node`'s `.spec.podCIDR`. + +Docker will now allocate IPs from the `cbr-cidr` block. Containers can reach +each other and `Nodes` over the `cbr0` bridge. Those IPs are all routable +within the GCE project network. + +GCE itself does not know anything about these IPs, though, so it will not NAT +them for outbound internet traffic. To achieve that, an iptables rule is used +to masquerade (aka SNAT - to make it seem as if packets came from the `Node` +itself) traffic that is bound for IPs outside the GCE project network +(10.0.0.0/8). + +```shell +iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE +``` + +Lastly, IP forwarding is enabled in the kernel (so the kernel will process +packets for bridged containers): + +```shell +sysctl net.ipv4.ip_forward=1 +``` + +The result of all this is that all `Pods` can reach each other and can egress +traffic to the internet. + +### Jaguar + +[Jaguar](https://gitlab.com/sdnlab/jaguar) is an open source solution for Kubernetes networking based on OpenDaylight. Jaguar provides an overlay network using VXLAN, and the Jaguar CNIPlugin provides one IP address per pod.
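Tying the GCE `kubenet` walkthrough above back to the Kubernetes API: the per-node subnet that the `cbr0` bridge is carved from is recorded on the `Node` object itself. The sketch below shows only the relevant field; the node name and CIDR are illustrative, not taken from any real cluster:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: gce-node-1            # illustrative node name
spec:
  # kubelet (with --network-plugin=kubenet) creates the cbr0 bridge on this
  # per-node /24 and allocates Pod IPs from it
  podCIDR: 10.244.1.0/24
```

On a live cluster you can read this value with `kubectl get node <node-name> -o jsonpath='{.spec.podCIDR}'`.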
+ +### k-vswitch + +[k-vswitch](https://github.com/k-vswitch/k-vswitch) is a simple Kubernetes networking plugin based on [Open vSwitch](https://www.openvswitch.org/). It leverages existing functionality in Open vSwitch to provide a robust networking plugin that is easy to operate, performant, and secure. + +### Knitter + +[Knitter](https://github.com/ZTE/Knitter/) is a network solution that supports multiple networks in Kubernetes. It provides tenant management and network management capabilities. Beyond multiple network planes, Knitter includes a set of end-to-end NFV container networking solutions, such as keeping IP addresses for applications and IP address migration. + +### Kube-OVN + +[Kube-OVN](https://github.com/alauda/kube-ovn) is an OVN-based Kubernetes network fabric for enterprises. With the help of OVN/OVS, it provides some advanced overlay network features like subnet, QoS, static IP allocation, traffic mirroring, gateway, OpenFlow-based network policy and service proxy. + +### Kube-router + +[Kube-router](https://github.com/cloudnativelabs/kube-router) is a purpose-built networking solution for Kubernetes that aims to provide high performance and operational simplicity. Kube-router provides a Linux [LVS/IPVS](http://www.linuxvirtualserver.org/software/ipvs.html)-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and an iptables/ipset-based network policy enforcer. + +### L2 networks and Linux bridging + +If you have a "dumb" L2 network, such as a simple switch in a "bare-metal" +environment, you should be able to do something similar to the above GCE setup. +Note that these instructions have only been tried very casually - it seems to +work, but has not been thoroughly tested. If you use this technique and +perfect the process, please let us know. + +Follow the "With Linux Bridge devices" section of [this very nice +tutorial](http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from +Lars Kellogg-Stedman. + +### Multus (a Multi Network plugin) + +[Multus](https://github.com/Intel-Corp/multus-cni) is a multi-CNI plugin that supports the multi-networking feature in Kubernetes using CRD-based network objects. + +Multus supports all [reference plugins](https://github.com/containernetworking/plugins) (e.g. [Flannel](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel), [DHCP](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp), [Macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan)) that implement the CNI specification and 3rd-party plugins (e.g. [Calico](https://github.com/projectcalico/cni-plugin), [Weave](https://github.com/weaveworks/weave), [Cilium](https://github.com/cilium/cilium), [Contiv](https://github.com/contiv/netplugin)). In addition, Multus supports [SRIOV](https://github.com/hustcat/sriov-cni), [DPDK](https://github.com/Intel-Corp/sriov-cni), and [OVS-DPDK & VPP](https://github.com/intel/vhost-user-net-plugin) workloads in Kubernetes, for both cloud native and NFV-based applications. + +### NSX-T + +[VMware NSX-T](https://docs.vmware.com/en/VMware-NSX-T/index.html) is a network virtualization and security platform.
NSX-T can provide network virtualization for a multi-cloud and multi-hypervisor environment and is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks. In addition to vSphere hypervisors, these environments include other hypervisors such as KVM, as well as containers and bare metal. + +[NSX-T Container Plug-in (NCP)](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) provides integration between NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift. + +### Nuage Networks VCS (Virtualized Cloud Services) + +[Nuage](http://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature-rich SDN Controller built on open standards. + +The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage's policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications. The platform's real-time analytics engine enables visibility and security monitoring for Kubernetes applications. + +### OpenVSwitch + +[OpenVSwitch](https://www.openvswitch.org/) is a somewhat more mature but also +complicated way to build an overlay network. This is endorsed by several of the +"Big Shops" for networking. + +### OVN (Open Virtual Networking) + +OVN is an open source network virtualization solution developed by the +Open vSwitch community. It lets one create logical switches, logical routers, +stateful ACLs, load balancers, and so on, to build different virtual networking +topologies. The project has a specific Kubernetes plugin and documentation +at [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes). + +### Project Calico + +[Project Calico](http://docs.projectcalico.org/) is an open source container networking provider and network policy engine. + +Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet, for both Linux (open source) and Windows (proprietary - available from [Tigera](https://www.tigera.io/essentials/)). Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent-based network security policy for Kubernetes pods via its distributed firewall. + +Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka [canal](https://github.com/tigera/canal), or native GCE, AWS or Azure networking. + +### Romana + +[Romana](http://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/concepts/services-networking/network-policies/) to provide isolation across network namespaces. + +### Weave Net from Weaveworks + +[Weave Net](https://www.weave.works/products/weave-net/) is a +resilient and simple-to-use network for Kubernetes and its hosted applications. +Weave Net runs as a [CNI plug-in](https://www.weave.works/docs/net/latest/cni-plugin/) +or stand-alone.
In either version, it doesn't require any configuration or extra code +to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes. + +{{% /capture %}} + +{{% capture whatsnext %}} + +ネットワークモデルの初期設計とその根拠、および将来の計画については、[ネットワーク設計ドキュメント](https://git.k8s.io/community/contributors/design-proposals/network/networking.md)で詳細に説明されています。 + +{{% /capture %}} diff --git a/content/ja/docs/concepts/configuration/assign-pod-node.md b/content/ja/docs/concepts/configuration/assign-pod-node.md index 3685687cd20dc..0dbde41861092 100644 --- a/content/ja/docs/concepts/configuration/assign-pod-node.md +++ b/content/ja/docs/concepts/configuration/assign-pod-node.md @@ -7,7 +7,7 @@ weight: 30 {{% capture overview %}} -[Pod](/docs/concepts/workloads/pods/pod/)が稼働する[Node](/docs/concepts/architecture/nodes/)を特定のものに指定したり、優先条件を指定して制限することができます。 +[Pod](/ja/docs/concepts/workloads/pods/pod/)が稼働する[Node](/ja/docs/concepts/architecture/nodes/)を特定のものに指定したり、優先条件を指定して制限することができます。 これを実現するためにはいくつかの方法がありますが、推奨されている方法は[ラベルでの選択](/docs/concepts/overview/working-with-objects/labels/)です。 スケジューラーが最適な配置を選択するため、一般的にはこのような制限は不要です(例えば、複数のPodを別々のNodeへデプロイしたり、Podを配置する際にリソースが不十分なNodeにはデプロイされないことが挙げられます)が、 SSDが搭載されているNodeにPodをデプロイしたり、同じアベイラビリティーゾーン内で通信する異なるサービスのPodを同じNodeにデプロイする等、柔軟な制御が必要なこともあります。 @@ -83,7 +83,7 @@ nodeSelectorを以下のように追加します: ## Nodeの隔離や制限 Nodeにラベルを付与することで、Podは特定のNodeやNodeグループにスケジュールされます。 -これにより、特定のPodを、確かな隔離性や安全性、特性を持ったNodeで稼働させることができます。 +これにより、特定のPodを、確かな隔離性や安全性、特性を持ったNodeで稼働させることができます。 この目的でラベルを使用する際に、Node上のkubeletプロセスに上書きされないラベルキーを選択することが強く推奨されています。 これは、安全性が損なわれたNodeがkubeletの認証情報をNodeのオブジェクトに設定したり、スケジューラーがそのようなNodeにデプロイすることを防ぎます。 @@ -95,7 +95,7 @@ Nodeの隔離にラベルのプレフィックスを使用するためには、 3. Nodeに`node-restriction.kubernetes.io/` プレフィックスのラベルを付与し、そのラベルがnode selectorに指定されていること。 例えば、`example.com.node-restriction.kubernetes.io/fips=true` または `example.com.node-restriction.kubernetes.io/pci-dss=true`のようなラベルです。 -## Affinity と Anti-Affinity +## Affinity と Anti-Affinity {#affinity-and-anti-affinity} `nodeSelector`はPodの稼働を特定のラベルが付与されたNodeに制限する最も簡単な方法です。 Affinity/Anti-Affinityでは、より柔軟な指定方法が提供されています。 @@ -296,7 +296,7 @@ spec: topologyKey: "kubernetes.io/hostname" containers: - name: web-app - image: nginx:1.12-alpine + image: nginx:1.16-alpine ``` 上記2つのDeploymentが生成されると、3つのノードは以下のようになります。 diff --git a/content/ja/docs/concepts/configuration/overview.md b/content/ja/docs/concepts/configuration/overview.md index 7902ce382523b..8255db692ab1d 100644 --- a/content/ja/docs/concepts/configuration/overview.md +++ b/content/ja/docs/concepts/configuration/overview.md @@ -29,13 +29,13 @@ weight: 10 ## "真っ裸"のPod に対する ReplicaSet、Deployment、およびJob -- 可能な限り、"真っ裸"のPod([ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)や[Deployment](/docs/concepts/workloads/controllers/deployment/)にバインドされていないPod)は使わないでください。Nodeに障害が発生した場合、これらのPodは再スケジュールされません。 +- 可能な限り、"真っ裸"のPod([ReplicaSet](/ja/docs/concepts/workloads/controllers/replicaset/)や[Deployment](/ja/docs/concepts/workloads/controllers/deployment/)にバインドされていないPod)は使わないでください。Nodeに障害が発生した場合、これらのPodは再スケジュールされません。 明示的に[`restartPolicy: Never`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)を使いたいシーンを除いて、DeploymentはPodを直接作成するよりもほとんど常に望ましい方法です。Deploymentには、希望する数のPodが常に使用可能であることを確認するためにReplicaSetを作成したり、Podを置き換えるための戦略(RollingUpdateなど)を指定したりできます。[Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/)のほうが適切な場合もあるかもしれません。 ## Service -- 
対応するバックエンドワークロード(DeploymentまたはReplicaSet)の前、およびそれにアクセスする必要があるワークロードの前に[Service](/docs/concepts/services-networking/service/)を作成します。Kubernetesがコンテナを起動すると、コンテナ起動時に実行されていたすべてのServiceを指す環境変数が提供されます。たとえば、fooという名前のServiceが存在する場合、すべてのコンテナは初期環境で次の変数を取得します。 +- 対応するバックエンドワークロード(DeploymentまたはReplicaSet)の前、およびそれにアクセスする必要があるワークロードの前に[Service](/ja/docs/concepts/services-networking/service/)を作成します。Kubernetesがコンテナを起動すると、コンテナ起動時に実行されていたすべてのServiceを指す環境変数が提供されます。たとえば、fooという名前のServiceが存在する場合、すべてのコンテナは初期環境で次の変数を取得します。 ```shell FOO_SERVICE_HOST= @@ -50,18 +50,17 @@ weight: 10 デバッグ目的でのみポートにアクセスする必要がある場合は、[apiserver proxy](/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls)または[`kubectl port-forward`](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)を使用できます。 - ノード上でPodのポートを明示的に公開する必要がある場合は、hostPortに頼る前に[NodePort](/docs/concepts/services-networking/service/#nodeport)の使用を検討してください。 + ノード上でPodのポートを明示的に公開する必要がある場合は、hostPortに頼る前に[NodePort](/ja/docs/concepts/services-networking/service/#nodeport)の使用を検討してください。 - `hostPort`の理由と同じくして、`hostNetwork`の使用はできるだけ避けてください。 -- `kube-proxy`のロードバランシングが不要な場合は、[headless Service](/docs/concepts/services-networking/service/#headless- -services)(`ClusterIP`が`None`)を使用してServiceを簡単に検出できます。 +- `kube-proxy`のロードバランシングが不要な場合は、[headless Service](/ja/docs/concepts/services-networking/service/#headless-service)(`ClusterIP`が`None`)を使用してServiceを簡単に検出できます。 ## ラベルの使用 - `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`のように、アプリケーションまたはデプロイメントの__セマンティック属性__を識別する[ラベル](/docs/concepts/overview/working-with-objects/labels/)を定義して使いましょう。これらのラベルを使用して、他のリソースに適切なポッドを選択できます。例えば、すべての`tier:frontend`を持つPodを選択するServiceや、`app:myapp`に属するすべての`phase:test`コンポーネント、などです。このアプローチの例を知るには、[ゲストブック](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)アプリも合わせてご覧ください。 -セレクターからリリース固有のラベルを省略することで、Serviceを複数のDeploymentにまたがるように作成できます。 [Deployment](/docs/concepts/workloads/controllers/deployment/)により、ダウンタイムなしで実行中のサービスを簡単に更新できます。 +セレクターからリリース固有のラベルを省略することで、Serviceを複数のDeploymentにまたがるように作成できます。 [Deployment](/ja/docs/concepts/workloads/controllers/deployment/)により、ダウンタイムなしで実行中のサービスを簡単に更新できます。 オブジェクトの望ましい状態はDeploymentによって記述され、その仕様への変更が_適用_されると、Deploymentコントローラは制御された速度で実際の状態を望ましい状態に変更します。 diff --git a/content/ja/docs/concepts/containers/container-lifecycle-hooks.md b/content/ja/docs/concepts/containers/container-lifecycle-hooks.md index 4eb737ff159c9..943e77aae28f6 100644 --- a/content/ja/docs/concepts/containers/container-lifecycle-hooks.md +++ b/content/ja/docs/concepts/containers/container-lifecycle-hooks.md @@ -34,7 +34,7 @@ Angularなどのコンポーネントライフサイクルフックを持つ多 これはブロッキング、つまり同期的であるため、コンテナを削除するための呼び出しを送信する前に完了する必要があります。 ハンドラーにパラメーターは渡されません。 -終了動作の詳細な説明は、[Termination of Pods](/docs/concepts/workloads/pods/pod/#termination-of-pods)にあります。 +終了動作の詳細な説明は、[Termination of Pods](/ja/docs/concepts/workloads/pods/pod/#podの終了)にあります。 ### フックハンドラーの実装 diff --git a/content/ja/docs/concepts/containers/runtime-class.md b/content/ja/docs/concepts/containers/runtime-class.md index 8d9c72aca235c..1acbdcf21994a 100644 --- a/content/ja/docs/concepts/containers/runtime-class.md +++ b/content/ja/docs/concepts/containers/runtime-class.md @@ -40,7 +40,7 @@ RuntimeClassを通じて利用可能な設定はContainer Runtime Interface (CRI {{< note >}} RuntimeClassは現時点において、クラスター全体で同じ種類のNode設定であることを仮定しています。(これは全てのNodeがコンテナランタイムに関して同じ方法で構成されていることを意味します)。 
-設定が異なるNodeに関しては、スケジューリング機能を通じてRuntimeClassとは独立して管理されなくてはなりません。([PodをNodeに割り当てる方法](/docs/concepts/configuration/assign-pod-node/)を参照して下さい)。 +設定が異なるNodeに関しては、スケジューリング機能を通じてRuntimeClassとは独立して管理されなくてはなりません。([PodをNodeに割り当てる方法](/ja/docs/concepts/configuration/assign-pod-node/)を参照して下さい)。 {{< /note >}} RuntimeClassの設定は、RuntimeClassによって参照される`ハンドラー`名を持ちます。そのハンドラーは正式なDNS-1123に準拠する形式のラベルでなくてはなりません(英数字 + `-`の文字で構成されます)。 diff --git a/content/ja/docs/concepts/overview/components.md b/content/ja/docs/concepts/overview/components.md index 3824bde0c6d8c..5a4f894c44303 100644 --- a/content/ja/docs/concepts/overview/components.md +++ b/content/ja/docs/concepts/overview/components.md @@ -24,7 +24,7 @@ Kubernetesをデプロイすると、クラスターが展開されます。 ## マスターコンポーネント マスターコンポーネントは、クラスターのコントロールプレーンを提供します。 -マスターコンポーネントは、クラスターに関する全体的な決定(スケジューリングなど)を行います。また、クラスターイベントの検出および応答を行います(たとえば、deploymentの`replica`フィールドが満たされていない場合に、新しい {{< glossary_tooltip text="pod" term_id="pod">}} を起動する等)。 +マスターコンポーネントは、クラスターに関する全体的な決定(スケジューリングなど)を行います。また、クラスターイベントの検出および応答を行います(たとえば、deploymentの`replicas`フィールドが満たされていない場合に、新しい {{< glossary_tooltip text="pod" term_id="pod">}} を起動する等)。 マスターコンポーネントはクラスター内のどのマシンでも実行できますが、シンプルにするため、セットアップスクリプトは通常、すべてのマスターコンポーネントを同じマシンで起動し、そのマシンではユーザーコンテナを実行しません。 マルチマスター VMセットアップの例については、[高可用性クラスターの構築](/docs/admin/high-availability/) を参照してください。 @@ -80,7 +80,7 @@ cloud-controller-managerを使用すると、クラウドベンダーのコー {{< glossary_definition term_id="kube-proxy" length="all" >}} -### コンテナランタイム +### コンテナランタイム {#container-runtime} {{< glossary_definition term_id="container-runtime" length="all" >}} diff --git a/content/ja/docs/concepts/overview/kubernetes-api.md b/content/ja/docs/concepts/overview/kubernetes-api.md index 01a41309de61d..d7851d954b0c4 100644 --- a/content/ja/docs/concepts/overview/kubernetes-api.md +++ b/content/ja/docs/concepts/overview/kubernetes-api.md @@ -10,7 +10,7 @@ card: {{% capture overview %}} -全般的なAPIの規則は、[API規則ドキュメント](https://git.k8s.io/community/contributors/devel/api-conventions.md)に記載されています。 +全般的なAPIの規則は、[API規則ドキュメント](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md)に記載されています。 APIエンドポイント、リソースタイプ、そしてサンプルは[APIリファレンス](/docs/reference)に記載されています。 diff --git a/content/ja/docs/concepts/overview/what-is-kubernetes.md b/content/ja/docs/concepts/overview/what-is-kubernetes.md index 776642c92f890..6299002ac6cb1 100644 --- a/content/ja/docs/concepts/overview/what-is-kubernetes.md +++ b/content/ja/docs/concepts/overview/what-is-kubernetes.md @@ -36,7 +36,7 @@ Kubernetesが多くの機能を提供すると言いつつも、新しい機能 [ラベル](/docs/concepts/overview/working-with-objects/labels/)を使用すると、ユーザーは自分のリソースを整理できます。[アノテーション](/docs/concepts/overview/working-with-objects/annotations/)を使用すると、ユーザーは自分のワークフローを容易にし、管理ツールが状態をチェックするための簡単な方法を提供するためにカスタムデータを使ってリソースを装飾できるようになります。 -さらに、[Kubernetesコントロールプレーン](/docs/concepts/overview/components/)は、開発者やユーザーが使える[API](/docs/reference/using-api/api-overview/)の上で成り立っています。ユーザーは[スケジューラー](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/devel/scheduler.md)などの独自のコントローラーを、汎用の[コマンドラインツール](/docs/user-guide/kubectl-overview/)で使える[独自のAPI](/docs/concepts/api-extension/custom-resources/)を持たせて作成することができます。 +さらに、[Kubernetesコントロールプレーン](/ja/docs/concepts/overview/components/)は、開発者やユーザーが使える[API](/docs/reference/using-api/api-overview/)の上で成り立っています。ユーザーは[スケジューラー](https://github.com/kubernetes/community/blob/{{< param "githubbranch" 
>}}/contributors/devel/scheduler.md)などの独自のコントローラーを、汎用の[コマンドラインツール](/docs/user-guide/kubectl-overview/)で使える[独自のAPI](/docs/concepts/api-extension/custom-resources/)を持たせて作成することができます。 この[デザイン](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md)によって、他の多くのシステムがKubernetes上で構築できるようになりました。 @@ -98,7 +98,7 @@ Kubernetesは... {{% capture whatsnext %}} * [はじめる](/docs/setup/)準備はできましたか? -* さらなる詳細については、[Kubernetesのドキュメント](/docs/home/)を御覧ください。 +* さらなる詳細については、[Kubernetesのドキュメント](/ja/docs/home/)を御覧ください。 {{% /capture %}} diff --git a/content/ja/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/ja/docs/concepts/overview/working-with-objects/kubernetes-objects.md index 336d4af7ce0f9..81bee4ca68266 100644 --- a/content/ja/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/content/ja/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -67,6 +67,6 @@ Kubernetesオブジェクトを`.yaml`ファイルに記載して作成する場 {{% capture whatsnext %}} -* 最も重要、かつ基本的なKubernetesオブジェクト群を学びましょう、例えば、[Pod](/docs/concepts/workloads/pods/pod-overview/)です。 +* 最も重要、かつ基本的なKubernetesオブジェクト群を学びましょう、例えば、[Pod](/ja/docs/concepts/workloads/pods/pod-overview/)です。 {{% /capture %}} diff --git a/content/ja/docs/concepts/overview/working-with-objects/labels.md b/content/ja/docs/concepts/overview/working-with-objects/labels.md index 7451cf35c252e..9d4274275999f 100644 --- a/content/ja/docs/concepts/overview/working-with-objects/labels.md +++ b/content/ja/docs/concepts/overview/working-with-objects/labels.md @@ -116,7 +116,7 @@ spec: ### *集合ベース(Set-based)* の要件(requirement) -*集合ベース(Set-based)* のラベルの要件は値のセットによってキーをフィルタリングします。 +*集合ベース(Set-based)* のラベルの要件は値のセットによってキーをフィルタリングします。 `in`、`notin`、`exists`の3つのオペレーターをサポートしています(キーを特定するのみ)。 例えば: @@ -198,7 +198,7 @@ selector: #### *集合ベース* の要件指定をサポートするリソース -[`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/)や[`Deployment`](/docs/concepts/workloads/controllers/deployment/)、[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/)や[`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/)などの比較的新しいリソースは、*集合ベース* での要件指定もサポートしています。 +[`Job`](/docs/concepts/workloads/controllers/jobs-run-to-completion/)や[`Deployment`](/ja/docs/concepts/workloads/controllers/deployment/)、[`ReplicaSet`](/ja/docs/concepts/workloads/controllers/replicaset/)や[`DaemonSet`](/ja/docs/concepts/workloads/controllers/daemonset/)などの比較的新しいリソースは、*集合ベース* での要件指定もサポートしています。 ```yaml selector: matchLabels: @@ -214,6 +214,6 @@ selector: #### Nodeのセットを選択する ラベルを選択するための1つのユースケースはPodがスケジュールできるNodeのセットを制限することです。 -さらなる情報に関しては、[Node選定](/docs/concepts/configuration/assign-pod-node/) のドキュメントを参照してください。 +さらなる情報に関しては、[Node選定](/ja/docs/concepts/configuration/assign-pod-node/) のドキュメントを参照してください。 {{% /capture %}} diff --git a/content/ja/docs/concepts/scheduling/kube-scheduler.md b/content/ja/docs/concepts/scheduling/kube-scheduler.md index 1b69474896f87..53fd5c67b7118 100644 --- a/content/ja/docs/concepts/scheduling/kube-scheduler.md +++ b/content/ja/docs/concepts/scheduling/kube-scheduler.md @@ -98,7 +98,7 @@ kube-schedulerは、デフォルトで用意されているスケジューリン - `NodePreferAvoidPodsPriority`: Nodeの`scheduler.alpha.kubernetes.io/preferAvoidPods`というアノテーションに基づいてNodeの優先順位づけをします。この設定により、2つの異なるPodが同じNode上で実行しないことを示唆できます。 -- `NodeAffinityPriority`: "PreferredDuringSchedulingIgnoredDuringExecution"の値によって示されたNode Affinityのスケジューリング性向に基づいてNodeの優先順位づけを行います。詳細は[NodeへのPodの割り当て](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/)にて確認できます。 +- `NodeAffinityPriority`: 
"PreferredDuringSchedulingIgnoredDuringExecution"の値によって示されたNode Affinityのスケジューリング性向に基づいてNodeの優先順位づけを行います。詳細は[NodeへのPodの割り当て](https://kubernetes.io/ja/docs/concepts/configuration/assign-pod-node/)にて確認できます。 - `TaintTolerationPriority`: Node上における許容できないTaintsの数に基づいて、全てのNodeの優先順位リストを準備します。このポリシーでは優先順位リストを考慮してNodeのランクを調整します。 @@ -106,7 +106,7 @@ kube-schedulerは、デフォルトで用意されているスケジューリン - `ServiceSpreadingPriority`: このポリシーの目的は、特定のServiceに対するバックエンドのPodが、それぞれ異なるNodeで実行されるようにすることです。このポリシーではServiceのバックエンドのPodが既に実行されていないNode上にスケジュールするように優先します。これによる結果として、Serviceは単体のNode障害に対してより耐障害性が高まります。 -- `CalculateAntiAffinityPriorityMap`: このポリシーは[PodのAnti-Affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)の実装に役立ちます。 +- `CalculateAntiAffinityPriorityMap`: このポリシーは[PodのAnti-Affinity](https://kubernetes.io/ja/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)の実装に役立ちます。 - `EqualPriorityMap`: 全てのNodeに対して等しい重みを与えます。 diff --git a/content/ja/docs/concepts/services-networking/connect-applications-service.md b/content/ja/docs/concepts/services-networking/connect-applications-service.md index 1b3ac2e810ff5..e1250bbdeaf96 100644 --- a/content/ja/docs/concepts/services-networking/connect-applications-service.md +++ b/content/ja/docs/concepts/services-networking/connect-applications-service.md @@ -105,7 +105,7 @@ my-nginx ClusterIP 10.0.162.149 80/TCP 21s 前述のように、ServiceはPodのグループによってサポートされています。 これらのPodはエンドポイントを通じて公開されます。 -Serviceのセレクターは継続的に評価され、結果は`my-nginx`という名前のEndpointオブジェクトにPOSTされます。 +Serviceのセレクターは継続的に評価され、結果は`my-nginx`という名前のEndpointsオブジェクトにPOSTされます。 Podが終了すると、エンドポイントから自動的に削除され、Serviceのセレクターに一致する新しいPodが自動的にエンドポイントに追加されます。 エンドポイントを確認し、IPが最初のステップで作成されたPodと同じであることを確認します: @@ -135,7 +135,7 @@ my-nginx 10.244.2.5:80,10.244.3.4:80 1m クラスター内の任意のノードから、`:`でnginx Serviceにcurl接続できるようになりました。 Service IPは完全に仮想的なもので、ホスト側のネットワークには接続できないことに注意してください。 -この仕組みに興味がある場合は、[サービスプロキシー](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)の詳細をお読みください。 +この仕組みに興味がある場合は、[サービスプロキシー](/ja/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)の詳細をお読みください。 ## Serviceにアクセスする diff --git a/content/ja/docs/concepts/services-networking/dns-pod-service.md b/content/ja/docs/concepts/services-networking/dns-pod-service.md index 558f1d7b033ec..fa76965e8e381 100644 --- a/content/ja/docs/concepts/services-networking/dns-pod-service.md +++ b/content/ja/docs/concepts/services-networking/dns-pod-service.md @@ -36,7 +36,7 @@ Kubernetesの`bar`というネームスペース内で`foo`という名前のSer ### SRVレコード SRVレコードは、通常のServiceもしくは[Headless -Services](/docs/concepts/services-networking/service/#headless-services)の一部である名前付きポート向けに作成されます。それぞれの名前付きポートに対して、そのSRVレコードは`_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster.local`という形式となります。 +Services](/ja/docs/concepts/services-networking/service/#headless-service)の一部である名前付きポート向けに作成されます。それぞれの名前付きポートに対して、そのSRVレコードは`_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster.local`という形式となります。 通常のServiceに対しては、このSRVレコードは`my-svc.my-namespace.svc.cluster.local`という形式のドメイン名とポート番号へ名前解決します。 Headless Serviceに対しては、このSRVレコードは複数の結果を返します。それはServiceの背後にある各Podの1つを返すのと、`auto-generated-name.my-svc.my-namespace.svc.cluster.local`という形式のPodのドメイン名とポート番号を含んだ結果を返します。 diff --git a/content/ja/docs/concepts/services-networking/ingress.md b/content/ja/docs/concepts/services-networking/ingress.md index d71b87f3e25f9..7fd3a81bab935 100644 --- a/content/ja/docs/concepts/services-networking/ingress.md +++ b/content/ja/docs/concepts/services-networking/ingress.md @@ 
-13,7 +13,7 @@ weight: 40 ## 用語 -まずわかりやすくするために、このガイドでは次の用語を定義します。 +まずわかりやすくするために、このガイドでは次の用語を定義します。 - ノード: Kubernetes内のワーカーマシンで、クラスターの一部です。 @@ -27,7 +27,7 @@ weight: 40 ## Ingressとは何か -Ingressはクラスター外からクラスター内{{< link text="Service" url="/docs/concepts/services-networking/service/" >}}へのHTTPとHTTPSのルートを公開します。トラフィックのルーティングはIngressリソース上で定義されるルールによって制御されます。 +Ingressはクラスター外からクラスター内{{< link text="Service" url="/ja/docs/concepts/services-networking/service/" >}}へのHTTPとHTTPSのルートを公開します。トラフィックのルーティングはIngressリソース上で定義されるルールによって制御されます。 ```none internet @@ -39,7 +39,7 @@ Ingressはクラスター外からクラスター内{{< link text="Service" url= IngressはServiceに対して、外部疎通できるURL、負荷分散トラフィック、SSL/TLS終端の機能や、名前ベースの仮想ホスティングを提供するように構成できます。[Ingressコントローラー](/docs/concepts/services-networking/ingress-controllers)は通常はロードバランサーを使用してIngressの機能を実現しますが、エッジルーターや、追加のフロントエンドを構成してトラフィックの処理を支援することもできます。 -Ingressは任意のポートやプロトコルを公開しません。HTTPやHTTPS以外のServiceをインターネットに公開するときは、[Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport)や[Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer)のServiceタイプを使用することが多いです。 +Ingressは任意のポートやプロトコルを公開しません。HTTPやHTTPS以外のServiceをインターネットに公開するときは、[Service.Type=NodePort](/ja/docs/concepts/services-networking/service/#nodeport)や[Service.Type=LoadBalancer](/ja/docs/concepts/services-networking/service/#loadbalancer)のServiceタイプを使用することが多いです。 ## Ingressを使用する上での前提条件 @@ -86,7 +86,7 @@ Ingress [Spec](https://git.k8s.io/community/contributors/devel/sig-architecture/ * オプションで設定可能なホスト名。上記のリソースの例では、ホスト名が指定されていないと、そのルールは指定されたIPアドレスを経由する全てのインバウンドHTTPトラフィックに適用されます。ホスト名が指定されていると(例: foo.bar.com)、そのルールはホストに対して適用されます。 * パスのリスト(例: `/testpath`)。各パスには`serviceName`と`servicePort`で定義されるバックエンドが関連づけられます。ロードバランサーがトラフィックを関連づけられたServiceに転送するために、外部からくるリクエストのホスト名とパスが条件と一致させる必要があります。 -* [Serviceドキュメント](/docs/concepts/services-networking/service/)に書かれているように、バックエンドはServiceとポート名の組み合わせとなります。Ingressで設定されたホスト名とパスのルールに一致するHTTP(とHTTPS)のリクエストは、リスト内のバックエンドに対して送信されます。 +* [Serviceドキュメント](/ja/docs/concepts/services-networking/service/)に書かれているように、バックエンドはServiceとポート名の組み合わせとなります。Ingressで設定されたホスト名とパスのルールに一致するHTTP(とHTTPS)のリクエストは、リスト内のバックエンドに対して送信されます。 Ingressコントローラーでは、デフォルトのバックエンドが設定されていることがあります。これはSpec内で指定されているパスに一致しないようなリクエストのためのバックエンドです。 @@ -183,7 +183,7 @@ IngressコントローラーはService(`service1`、`service2`)が存在する 構築が完了すると、ADDRESSフィールドでロードバランサーのアドレスを確認できます。 {{< note >}} -ユーザーが使用している[Ingressコントローラー](/docs/concepts/services-networking/ingress-controllers)に依存しますが、ユーザーはdefault-http-backend[Service](/docs/concepts/services-networking/service/)の作成が必要な場合があります。 +ユーザーが使用している[Ingressコントローラー](/docs/concepts/services-networking/ingress-controllers)に依存しますが、ユーザーはdefault-http-backend[Service](/ja/docs/concepts/services-networking/service/)の作成が必要な場合があります。 {{< /note >}} ### 名前ベースの仮想ホスティング @@ -392,8 +392,8 @@ Ingressと関連するリソースの今後の開発については[SIG Network] Ingressリソースに直接関与しない複数の方法でServiceを公開できます。 下記の2つの使用を検討してください。 -* [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer) -* [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport) +* [Service.Type=LoadBalancer](/ja/docs/concepts/services-networking/service/#loadbalancer) +* [Service.Type=NodePort](/ja/docs/concepts/services-networking/service/#nodeport) {{% /capture %}} diff --git a/content/ja/docs/concepts/services-networking/service.md b/content/ja/docs/concepts/services-networking/service.md index 4c49ebd2a2736..c143b291c5be3 100644 --- a/content/ja/docs/concepts/services-networking/service.md +++ b/content/ja/docs/concepts/services-networking/service.md @@ 
-71,7 +71,7 @@ spec: Kubernetesは、このServiceに対してIPアドレス("clusterIP"とも呼ばれます)を割り当てます。これはServiceのプロキシーによって使用されます(下記の[仮想IPとServiceプロキシー](#virtual-ips-and-service-proxies)を参照ください)。 -Serviceセレクターのコントローラーはセレクターに一致するPodを継続的にスキャンし、“my-service”という名前のEndpointオブジェクトに対して変更をPOSTします。 +Serviceセレクターのコントローラーはセレクターに一致するPodを継続的にスキャンし、“my-service”という名前のEndpointsオブジェクトに対して変更をPOSTします。 {{< note >}} Serviceは`port`から`targetPort`へのマッピングを行います。デフォルトでは、利便性のために`targetPort`フィールドは`port`フィールドと同じ値で設定されます。 @@ -108,8 +108,8 @@ spec: targetPort: 9376 ``` -このServiceはセレクターがないため、対応するEndpointオブジェクトは自動的に作成されません。 -ユーザーはEndpointオブジェクトを手動で追加することにより、向き先のネットワークアドレスとポートを手動でマッピングできます。 +このServiceはセレクターがないため、対応するEndpointsオブジェクトは自動的に作成されません。 +ユーザーはEndpointsオブジェクトを手動で追加することにより、向き先のネットワークアドレスとポートを手動でマッピングできます。 ```yaml apiVersion: v1 @@ -124,10 +124,10 @@ subsets: ``` {{< note >}} -Endpointのipは、loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), や +Endpointsのipは、loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), や link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6)に設定することができません。 -{{< glossary_tooltip term_id="kube-proxy" >}}が仮想IPを最終的な到達先に設定することをサポートしていないため、Endpointのipアドレスは他のKubernetes ServiceのClusterIPにすることができません。 +{{< glossary_tooltip term_id="kube-proxy" >}}が仮想IPを最終的な到達先に設定することをサポートしていないため、Endpointsのipアドレスは他のKubernetes ServiceのClusterIPにすることができません。 {{< /note >}} セレクターなしのServiceへのアクセスは、セレクターをもっているServiceと同じようにふるまいます。上記の例では、トラフィックはYAMLファイル内で`192.0.2.42:9376` (TCP)で定義された単一のエンドポイントにルーティングされます。 @@ -135,7 +135,7 @@ link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6)に設 ExternalName Serviceはセレクターの代わりにDNS名を使用する特殊なケースのServiceです。さらなる情報は、このドキュメントの後で紹介する[ExternalName](#externalname)を参照ください。 ### エンドポイントスライス -{{< feature-state for_k8s_version="v1.16" state="alpha" >}} +{{< feature-state for_k8s_version="v1.17" state="beta" >}} エンドポイントスライスは、Endpointsに対してよりスケーラブルな代替手段を提供できるAPIリソースです。概念的にはEndpointsに非常に似ていますが、エンドポイントスライスを使用すると、ネットワークエンドポイントを複数のリソースに分割できます。デフォルトでは、エンドポイントスライスは、100個のエンドポイントに到達すると「いっぱいである」と見なされ、その時点で追加のエンドポイントスライスが作成され、追加のエンドポイントが保存されます。 @@ -158,8 +158,8 @@ Serviceにおいてプロキシーを使う理由はいくつかあります。 ### user-spaceプロキシーモード {#proxy-mode-userspace} -このモードでは、kube-proxyはServiceやEndpointオブジェクトの追加・削除をチェックするために、Kubernetes Masterを監視します。 -各Serviceは、ローカルのNode上でポート(ランダムに選ばれたもの)を公開します。この"プロキシーポート"に対するどのようなリクエストも、そのServiceのバックエンドPodのどれか1つにプロキシーされます(Endpointを介して通知されたPodに対して)。 +このモードでは、kube-proxyはServiceやEndpointsオブジェクトの追加・削除をチェックするために、Kubernetes Masterを監視します。 +各Serviceは、ローカルのNode上でポート(ランダムに選ばれたもの)を公開します。この"プロキシーポート"に対するどのようなリクエストも、そのServiceのバックエンドPodのどれか1つにプロキシーされます(Endpointsを介して通知されたPodに対して)。 kube-proxyは、どのバックエンドPodを使うかを決める際にServiceの`SessionAffinity`項目の設定を考慮に入れます。 最後に、user-spaceプロキシーはServiceの`clusterIP`(仮想IP)と`port`に対するトラフィックをキャプチャするiptablesルールをインストールします。 @@ -171,9 +171,9 @@ kube-proxyは、どのバックエンドPodを使うかを決める際にService ### `iptables`プロキシーモード {#proxy-mode-iptables} -このモードでは、kube-proxyはServiceやEndpointオブジェクトの追加・削除のチェックのためにKubernetesコントロールプレーンを監視します。 +このモードでは、kube-proxyはServiceやEndpointsオブジェクトの追加・削除のチェックのためにKubernetesコントロールプレーンを監視します。 各Serviceでは、そのServiceの`clusterIP`と`port`に対するトラフィックをキャプチャするiptablesルールをインストールし、そのトラフィックをServiceのあるバックエンドのセットに対してリダイレクトします。 -各Endpointオブジェクトは、バックエンドのPodを選択するiptablesルールをインストールします。 +各Endpointsオブジェクトは、バックエンドのPodを選択するiptablesルールをインストールします。 デフォルトでは、iptablesモードにおけるkube-proxyはバックエンドPodをランダムで選択します。 @@ -191,7 +191,7 @@ iptablesモードのkube-proxyが正常なバックエンドPodのみをリダ {{< feature-state for_k8s_version="v1.11" state="stable" >}} 
-`ipvs`モードにおいて、kube-proxyはServiceとEndpointオブジェクトを監視し、IPVSルールを作成するために`netlink`インターフェースを呼び出し、定期的にKubernetesのServiceとEndpointとIPVSルールを同期させます。 +`ipvs`モードにおいて、kube-proxyはServiceとEndpointsオブジェクトを監視し、IPVSルールを作成するために`netlink`インターフェースを呼び出し、定期的にKubernetesのServiceとEndpointsとIPVSルールを同期させます。 このコントロールループはIPVSのステータスが理想的な状態になることを保証します。 Serviceにアクセスするとき、IPVSはトラフィックをバックエンドのPodに向けます。 @@ -320,15 +320,15 @@ KubernetesのDNSサーバーは`ExternalName` Serviceにアクセスする唯一 ### ラベルセレクターの利用 -ラベルセレクターを定義したHeadless Serviceにおいて、EndpointコントローラーはAPIにおいて`Endpoints`レコードを作成し、`Service`のバックエンドにある`Pod`へのIPを直接指し示すためにDNS設定を修正します。 +ラベルセレクターを定義したHeadless Serviceにおいて、EndpointsコントローラーはAPIにおいて`Endpoints`レコードを作成し、`Service`のバックエンドにある`Pod`へのIPを直接指し示すためにDNS設定を修正します。 ### ラベルセレクターなしの場合 -ラベルセレクターを定義しないHeadless Serviceにおいては、Endpoint コントローラーは`Endpoint`レコードを作成しません。 +ラベルセレクターを定義しないHeadless Serviceにおいては、Endpointsコントローラーは`Endpoints`レコードを作成しません。 しかしDNSのシステムは下記の2つ両方を探索し、設定します。 * [`ExternalName`](#externalname)タイプのServiceに対するCNAMEレコード - * 他の全てのServiceタイプを含む、Service名を共有している全ての`Endpoint`レコード + * 他の全てのServiceタイプを含む、Service名を共有している全ての`Endpoints`レコード ## Serviceの公開 (Serviceのタイプ) {#publishing-services-service-types} @@ -659,9 +659,9 @@ NLBは特定のインスタンスクラスでのみ稼働します。サポー `.spec.externalTrafficPolicy`を`Local`に設定することにより、クライアントIPアドレスは末端のPodに伝播します。しかし、これにより、トラフィックの分配が不均等になります。 特定のLoadBalancer Serviceに紐づいたPodがないNodeでは、自動的に割り当てられた`.spec.healthCheckNodePort`に対するNLBのターゲットグループのヘルスチェックが失敗し、トラフィックを全く受信しません。 -均等なトラフィックの分配を実現するために、DaemonSetの使用や、同一のNodeに配備しないように[Podのanti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)を設定します。 +均等なトラフィックの分配を実現するために、DaemonSetの使用や、同一のNodeに配備しないように[Podのanti-affinity](/ja/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)を設定します。 -また、[内部のロードバランサー](/docs/concepts/services-networking/service/#internal-load-balancer)のアノテーションとNLB Serviceを使用できます。 +また、[内部のロードバランサー](/ja/docs/concepts/services-networking/service/#internal-load-balancer)のアノテーションとNLB Serviceを使用できます。 NLBの背後にあるインスタンスに対してクライアントのトラフィックを転送するために、Nodeのセキュリティーグループは下記のようなIPルールに従って変更されます。 @@ -742,7 +742,7 @@ IPアドレスをハードコードする場合、[Headless Service](#headless-s `my-service.prod.svc.cluster.local`というホストをルックアップするとき、クラスターのDNS Serviceは`CNAME`レコードと`my.database.example.com`という値を返します。 `my-service`へのアクセスは、他のServiceと同じ方法ですが、再接続する際はプロキシーや転送を介して行うよりも、DNSレベルで行われることが決定的に異なる点となります。 -後にユーザーが使用しているデータベースをクラスター内に移行することになった後は、Podを起動させ、適切なラベルセレクターやEndpointを追加し、Serviceの`type`を変更します。 +後にユーザーが使用しているデータベースをクラスター内に移行することになった後は、Podを起動させ、適切なラベルセレクターやEndpointsを追加し、Serviceの`type`を変更します。 {{< warning >}} HTTPやHTTPSなどの一般的なプロトコルでExternalNameを使用する際に問題が発生する場合があります。ExternalNameを使用する場合、クラスター内のクライアントが使用するホスト名は、ExternalNameが参照する名前とは異なります。 @@ -758,7 +758,7 @@ HTTPやHTTPSなどの一般的なプロトコルでExternalNameを使用する ### External IPs もし1つ以上のクラスターNodeに転送するexternalIPが複数ある場合、Kubernetes Serviceは`externalIPs`に指定したIPで公開されます。 -そのexternalIP(到達先のIPとして扱われます)のServiceのポートからトラフィックがクラスターに入って来る場合、ServiceのEndpointのどれか1つに対して転送されます。 +そのexternalIP(到達先のIPとして扱われます)のServiceのポートからトラフィックがクラスターに入って来る場合、ServiceのEndpointsのどれか1つに対して転送されます。 `externalIPs`はKubernetesによって管理されず、それを管理する責任はクラスターの管理者にあります。 Serviceのspecにおいて、`externalIPs`は他のどの`ServiceTypes`と併用して設定できます。 @@ -815,7 +815,7 @@ Kubernetesは各Serviceに、それ自身のIPアドレスを割り当てるこ 実際に固定された向き先であるPodのIPアドレスとは異なり、ServiceのIPは実際には単一のホストによって応答されません。 その代わり、kube-proxyは必要な時に透過的にリダイレクトされる_仮想_ IPアドレスを定義するため、iptables(Linuxのパケット処理ロジック)を使用します。 -クライアントがVIPに接続する時、そのトラフィックは自動的に適切なEndpointに転送されます。 +クライアントがVIPに接続する時、そのトラフィックは自動的に適切なEndpointsに転送されます。 Service用の環境変数とDNSは、Serviceの仮想IPアドレス(とポート)の面において、自動的に生成されます。 
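補足として、この少し後で説明されるプロキシーモードの選択は、kube-proxyに`--config`フラグで渡す`KubeProxyConfiguration`で指定できます。以下はあくまで説明用の最小限のスケッチで、フィールドは一部のみを示しています。実際の設定方法はクラスターの構築方法によって異なります:

```yaml
# kube-proxyに--configフラグで渡す設定ファイルの最小限のスケッチ(説明用)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"   # ""(デフォルト)、"userspace"、"iptables"、"ipvs"のいずれか
```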
kube-proxyは3つの微妙に異なった動作をするプロキシーモード— userspace、iptablesとIPVS — をサポートしています。 @@ -838,7 +838,7 @@ kube-proxyが新しいServiceを見つけた時、kube-proxyは新しいラン また画像処理のアプリケーションについて考えます。バックエンドServiceが作成された時、そのKubernetesコントロールプレーンは仮想IPアドレスを割り当てます。例えば10.0.0.1などです。 Serviceのポートが1234で、そのServiceがクラスター内のすべてのkube-proxyインスタンスによって監視されていると仮定します。 kube-proxyが新しいServiceを見つけた時、kube-proxyは仮想IPから各Serviceのルールにリダイレクトされるような、iptablesルールのセットをインストールします。 -Service毎のルールは、トラフィックをバックエンドにリダイレクト(Destination NATを使用)しているEndpoint毎のルールに対してリンクしています。 +Service毎のルールは、トラフィックをバックエンドにリダイレクト(Destination NATを使用)しているEndpoints毎のルールに対してリンクしています。 クライアントがServiceの仮想IPアドレスに対して接続しているとき、そのiptablesルールが有効になります。 バックエンドのPodが選択され(SessionAffinityに基づくか、もしくはランダムで選択される)、パケットはバックエンドにリダイレクトされます。 @@ -874,7 +874,7 @@ ServiceはKubernetesのREST APIにおいてトップレベルのリソースで {{< feature-state for_k8s_version="v1.1" state="stable" >}} -もしクラウドプロバイダーがサポートしている場合、ServiceのEndpointに転送される外部のHTTP/HTTPSでのリバースプロキシーをセットアップするために、LoadBalancerモードでServiceを作成可能です。 +もしクラウドプロバイダーがサポートしている場合、ServiceのEndpointsに転送される外部のHTTP/HTTPSでのリバースプロキシーをセットアップするために、LoadBalancerモードでServiceを作成可能です。 {{< note >}} ユーザーはまた、HTTP / HTTPS Serviceを公開するために、Serviceの代わりに{{< glossary_tooltip term_id="ingress" >}}を利用することもできます。 {{< /note >}} @@ -898,9 +898,9 @@ PROXY TCP4 192.0.2.202 10.0.42.7 12345 7\r\n {{< feature-state for_k8s_version="v1.12" state="alpha" >}} -KubernetseはService、Endpoint、NetworkPolicyとPodの定義においてα版の機能として`protocol`フィールドの値でSCTPをサポートしています。この機能を有効にするために、クラスター管理者はAPI Serverにおいて`SCTPSupport`というFeature Gateを有効にする必要があります。例えば、`--feature-gates=SCTPSupport=true,…`といったように設定します。 +KubernetesはService、Endpoints、NetworkPolicyとPodの定義においてα版の機能として`protocol`フィールドの値でSCTPをサポートしています。この機能を有効にするために、クラスター管理者はAPI Serverにおいて`SCTPSupport`というFeature Gateを有効にする必要があります。例えば、`--feature-gates=SCTPSupport=true,…`といったように設定します。 -そのFeature Gateが有効になった時、ユーザーはService、Endpoint、NetworkPolicyの`protocol`フィールドと、Podの`SCTP`フィールドを設定できます。 +そのFeature Gateが有効になった時、ユーザーはService、Endpoints、NetworkPolicyの`protocol`フィールドと、Podの`SCTP`フィールドを設定できます。 Kubernetesは、TCP接続と同様に、SCTPアソシエーションに応じてネットワークをセットアップします。 #### 警告 {#caveat-sctp-overview} diff --git a/content/ja/docs/concepts/storage/persistent-volumes.md b/content/ja/docs/concepts/storage/persistent-volumes.md new file mode 100644 index 0000000000000..d7969f7d993cb --- /dev/null +++ b/content/ja/docs/concepts/storage/persistent-volumes.md @@ -0,0 +1,661 @@ +--- +title: 永続ボリューム +feature: + title: ストレージオーケストレーション + description: > + ローカルストレージやGCP、AWSなどのパブリッククラウドプロバイダー、もしくはNFS、iSCSI、Gluster、Ceph、Cinder、Flockerのようなネットワークストレージシステムの中から選択されたものを自動的にマウントします。 + +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +このドキュメントではKubernetesの`PersistentVolume`について説明します。[ボリューム](/docs/concepts/storage/volumes/)を一読することをおすすめします。 + +{{% /capture %}} + + +{{% capture body %}} + +## 概要 + +ストレージを管理することはインスタンスを管理することとは全くの別物です。`PersistentVolume`サブシステムは、ストレージが何から提供されているか、どのように消費されているかをユーザーと管理者から抽象化するAPIを提供します。これを実現するための`PersistentVolume`と`PersistentVolumeClaim`という2つの新しいAPIリソースを紹介します。 + +`PersistentVolume`(PV)は、管理者によって、もしくは[ストレージクラス](/docs/concepts/storage/storage-classes/)を使って動的にプロビジョニングされた、クラスター内のストレージの一部です。これはNodeと同じようにクラスターリソースの一部です。PVはVolumeのようなボリュームプラグインですが、PVを使う個別のPodとは独立したライフサイクルを持っています。このAPIオブジェクトはNFS、iSCSIやクラウドプロバイダー固有のストレージシステムの実装の詳細を捕捉します。 + +`PersistentVolumeClaim`(PVC)はユーザーによって要求されるストレージです。これはPodと似ています。PodはNodeリソースを消費し、PVCはPVリソースを消費します。Podは特定レベルのCPUとメモリーリソースを要求することができます。クレームは特定のサイズやアクセスモード(例えば、1ノードからのみ読み書きマウントができるモードや、複数ノードから読み込み専用マウントができるモードなどです)を要求することができます。
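たとえば、上で述べたサイズとアクセスモードの要求は、次のような最小限のマニフェストで表現できます(名前とサイズは説明用の仮の値です。完全な例はこのページの後半にあります):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim            # 説明用の仮の名前
spec:
  accessModes:
    - ReadWriteOnce           # 1ノードからのみ読み書きマウントできるモード
  resources:
    requests:
      storage: 1Gi            # 要求するストレージサイズ
```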
+`PersistentVolumeClaim`はユーザーに抽象化されたストレージリソースの消費を許可する一方、ユーザーは色々な問題に対処するためにパフォーマンスといった様々なプロパティを持った`PersistentVolume`を必要とすることは一般的なことです。クラスター管理者はユーザーに様々なボリュームがどのように実装されているかを表に出すことなく、サイズやアクセスモードだけではない色々な点で異なった、様々な`PersistentVolume`を提供できる必要があります。これらのニーズに応えるために`StorageClass`リソースがあります。 + +[実例を含む詳細なチュートリアル](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/)を参照して下さい。 + + +## ボリュームと要求のライフサイクル + +PVはクラスター内のリソースです。PVCはこれらのリソースの要求でありまた、クレームのチェックとしても機能します。PVとPVCの相互作用はこのライフサイクルに従います。 + +### プロビジョニング + +PVは静的か動的どちらかでプロビジョニングされます。 + +#### 静的 + +クラスター管理者は多数のPVを作成します。それらはクラスターのユーザーが使うことのできる実際のストレージの詳細を保持します。それらはKubernetes APIに存在し、利用できます。 + +#### 動的 + +ユーザーの`PersistentVolumeClaim`が管理者の作成したいずれの静的PVにも一致しない場合、クラスターはPVC用にボリュームを動的にプロビジョニングしようとする場合があります。 +このプロビジョニングは`StorageClass`に基づいています。PVCは[ストレージクラス](/docs/concepts/storage/storage-classes/)の要求が必要であり、管理者は動的プロビジョニングを行うためにストレージクラスの作成・設定が必要です。ストレージクラスを""にしたストレージ要求は、自身の動的プロビジョニングを事実上無効にします。 + +ストレージクラスに基づいたストレージの動的プロビジョニングを有効化するには、クラスター管理者が`DefaultStorageClass`[アドミッションコントローラー](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)をAPIサーバーで有効化する必要があります。 +これは例えば、`DefaultStorageClass`がAPIサーバーコンポーネントの`--enable-admission-plugins`フラグのコンマ区切りの順序付きリストの中に含まれているかで確認できます。 +APIサーバーのコマンドラインフラグの詳細については[kube-apiserver](/docs/admin/kube-apiserver/)のドキュメントを参照してください。 + +### バインディング + +ユーザは、特定のサイズのストレージとアクセスモードを指定した上で`PersistentVolumeClaim`を作成します(動的プロビジョニングの場合は、すでに作られています)。マスター内のコントロールループは、新しく作られるPVCをウォッチして、それにマッチするPVが見つかったときに、それらを紐付けます。PVが新しいPVC用に動的プロビジョニングされた場合、コントロールループは常にPVをそのPVCに紐付けます。そうでない場合、ユーザーは常に少なくとも要求したサイズ以上のボリュームを取得しますが、ボリュームは要求されたサイズを超えている可能性があります。一度紐付けされると、どのように紐付けられたかに関係なく`PersistentVolumeClaim`の紐付けは排他的(決められた特定のPVとしか結びつかない状態)になります。PVCからPVへの紐付けは1対1です。 + +一致するボリュームが存在しない場合、クレームはいつまでも紐付けされないままになります。一致するボリュームが利用可能になると、クレームがバインドされます。たとえば、50GiのPVがいくつもプロビジョニングされているクラスターだとしても、100Giを要求するPVCとは一致しません。100GiのPVがクラスターに追加されると、PVCを紐付けできます。 + +### 使用 + +Podは要求をボリュームとして使用します。クラスターは、要求を検査して紐付けられたボリュームを見つけそのボリュームをPodにマウントします。複数のアクセスモードをサポートするボリュームの場合、ユーザーはPodのボリュームとしてクレームを使う時にどのモードを希望するかを指定します。 + +ユーザーがクレームを取得し、そのクレームがバインドされると、バインドされたPVは必要な限りそのユーザーに属します。ユーザーはPodをスケジュールし、Podのvolumesブロックに`persistentVolumeClaim`を含めることで、バインドされたクレームのPVにアクセスします。 +[書式の詳細はこちらを確認して下さい。](#claims-as-volumes) + +### 使用中のストレージオブジェクトの保護 + +使用中のストレージオブジェクト保護機能の目的はデータ損失を防ぐために、Podによって実際に使用されている永続ボリュームクレーム(PVC)と、PVCにバインドされている永続ボリューム(PV)がシステムから削除されないようにすることです。 + +{{< note >}} +PVCを使用しているPodオブジェクトが存在する場合、PVCはPodによって実際に使用されています。 +{{< /note >}} + +ユーザーがPodによって実際に使用されているPVCを削除しても、そのPVCはすぐには削除されません。PVCの削除は、PVCがPodで使用されなくなるまで延期されます。また、管理者がPVCに紐付けられているPVを削除しても、PVはすぐには削除されません。PVがPVCに紐付けられなくなるまで、PVの削除は延期されます。 + +PVCの削除が保護されているかは、PVCのステータスが`Terminating`になっていて、そして`Finalizers`のリストに`kubernetes.io/pvc-protection`が含まれているかで確認できます。 + +```shell +kubectl describe pvc hostpath +Name: hostpath +Namespace: default +StorageClass: example-hostpath +Status: Terminating +Volume: +Labels: +Annotations: volume.beta.kubernetes.io/storage-class=example-hostpath + volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath +Finalizers: [kubernetes.io/pvc-protection] +``` + +同様にPVの削除が保護されているかは、PVのステータスが`Terminating`になっていて、そして`Finalizers`のリストに`kubernetes.io/pv-protection`が含まれているかで確認できます。 + +```shell +kubectl describe pv task-pv-volume +Name: task-pv-volume +Labels: type=local +Annotations: +Finalizers: [kubernetes.io/pv-protection] +StorageClass: standard +Status: Available +Claim: +Reclaim Policy: Delete +Access Modes: RWO +Capacity: 1Gi +Message: +Source: + Type: HostPath (bare host directory volume) + 
Path: /tmp/data + HostPathType: +Events: +``` + +### 再クレーム + +ユーザーは、ボリュームの使用が完了したら、リソースの再クレームを許可するAPIからPVCオブジェクトを削除できます。`PersistentVolume`の再クレームポリシーはそのクレームが解放された後のボリュームの処理をクラスターに指示します。現在、ボリュームは保持、リサイクル、または削除できます。 + +#### 保持 + +`Retain`という再クレームポリシーはリソースを手動で再クレームすることができます。`PersistentVolumeClaim`が削除される時、`PersistentVolume`は依然として存在はしますが、ボリュームは解放済みです。ただし、以前のクレームデータはボリューム上に残っているため、別のクレームにはまだ使用できません。管理者は次の手順でボリュームを手動で再クレームできます。 + +1. `PersistentVolume`を削除します。PVが削除された後も、外部インフラストラクチャー(AWS EBS、GCE PD、Azure Disk、Cinderボリュームなど)に関連付けられたストレージアセットは依然として残ります。 +1. ストレージアセットに関連するのデータを手動で適切にクリーンアップします。 +1. 関連するストレージアセットを手動で削除するか、同じストレージアセットを再利用したい場合、新しいストレージアセット定義と共に`PersistentVolume`を作成します。 + +#### 削除 + +`Delete`再クレームポリシーをサポートするボリュームプラグインの場合、削除すると`PersistentVolume`オブジェクトがKubernetesから削除されるだけでなく、AWS EBS、GCE PD、Azure Disk、Cinderボリュームなどの外部インフラストラクチャーの関連ストレージアセットも削除されます。動的にプロビジョニングされたボリュームは、[`StorageClass`の再クレームポリシー](#reclaim-policy)を継承します。これはデフォルトで削除です。管理者は、ユーザーの需要に応じて`StorageClass`を構成する必要があります。そうでない場合、PVは作成後に編集またはパッチを適用する必要があります。[PersistentVolumeの再クレームポリシーの変更](/docs/tasks/administer-cluster/change-pv-reclaim-policy/)を参照してください。 + +#### リサイクル + +{{< warning >}} +`Recycle`再クレームポリシーは廃止されました。代わりに、動的プロビジョニングを使用することをおすすめします。 +{{< /warning >}} + +基盤となるボリュームプラグインでサポートされている場合、`Recycle`再クレームポリシーはボリュームに対して基本的な削除(`rm -rf /thevolume/*`)を実行し、新しいクレームに対して再び利用できるようにします。 + +管理者は[こちら](/docs/admin/kube-controller-manager/)で説明するように、Kubernetesコントローラーマネージャーのコマンドライン引数を使用して、カスタムリサイクラーPodテンプレートを構成できます。カスタムリサイクラーPodテンプレートには、次の例に示すように、`volumes`仕様が含まれている必要があります。 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: pv-recycler + namespace: default +spec: + restartPolicy: Never + volumes: + - name: vol + hostPath: + path: /any/path/it/will/be/replaced + containers: + - name: pv-recycler + image: "k8s.gcr.io/busybox" + command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"] + volumeMounts: + - name: vol + mountPath: /scrub +``` +ただし、カスタムリサイクラーPodテンプレートの`volumes`パート内で指定された特定のパスは、リサイクルされるボリュームの特定のパスに置き換えられます。 + +### 永続ボリュームクレームの拡大 + +{{< feature-state for_k8s_version="v1.11" state="beta" >}} + +PersistentVolumeClaim(PVC)の拡大はデフォルトで有効です。次のボリュームの種類で拡大できます。 + +* gcePersistentDisk +* awsElasticBlockStore +* Cinder +* glusterfs +* rbd +* Azure File +* Azure Disk +* Portworx +* FlexVolumes +* CSI + +そのストレージクラスの`allowVolumeExpansion`フィールドがtrueとなっている場合のみ、PVCを拡大できます。 + + +``` yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: gluster-vol-default +provisioner: kubernetes.io/glusterfs +parameters: + resturl: "http://192.168.10.100:8080" + restuser: "" + secretNamespace: "" + secretName: "" +allowVolumeExpansion: true +``` + +PVCに対してさらに大きなボリュームを要求するには、PVCオブジェクトを編集してより大きなサイズを指定します。これにより`PersistentVolume`を受け持つ基盤にボリュームの拡大がトリガーされます。クレームを満たすため新しく`PersistentVolume`が作成されることはありません。代わりに既存のボリュームがリサイズされます。 + +#### CSIボリュームの拡張 + +{{< feature-state for_k8s_version="v1.16" state="beta" >}} + +CSIボリュームの拡張のサポートはデフォルトで有効になっていますが、ボリューム拡張をサポートするにはボリューム拡張を利用できるCSIドライバーも必要です。詳細については、それぞれのCSIドライバーのドキュメントを参照してください。 + +#### ファイルシステムを含むボリュームのリサイズ + +ファイルシステムがXFS、Ext3、またはExt4の場合にのみ、ファイルシステムを含むボリュームのサイズを変更できます。 + +ボリュームにファイルシステムが含まれる場合、新しいPodが`PersistentVolumeClaim`でReadWriteモードを使用している場合にのみ、ファイルシステムのサイズが変更されます。ファイルシステムの拡張は、Podの起動時、もしくはPodの実行時で基盤となるファイルシステムがオンラインの拡張をサポートする場合に行われます。 + +FlexVolumesでは、ドライバの`RequiresFSResize`機能がtrueに設定されている場合、サイズを変更できます。 +FlexVolumeは、Podの再起動時にサイズ変更できます。 + +#### 使用中の永続ボリュームクレームのリサイズ + +{{< feature-state for_k8s_version="v1.15" state="beta" >}} + +{{< note 
>}} +使用中のPVCの拡張は、Kubernetes 1.15以降のベータ版と、1.11以降のアルファ版として利用可能です。`ExpandInUsePersistentVolume`機能を有効化する必要があります。これはベータ機能のため多くのクラスターで自動的に行われます。詳細については、[フィーチャーゲート](/docs/reference/command-line-tools-reference/feature-gates/)のドキュメントを参照してください。 +{{< /note >}} + +この場合、既存のPVCを使用しているPodまたはDeploymentを削除して再作成する必要はありません。使用中のPVCは、ファイルシステムが拡張されるとすぐにPodで自動的に使用可能になります。この機能は、PodまたはDeploymentで使用されていないPVCには影響しません。拡張を完了する前に、PVCを使用するPodを作成する必要があります。 + +他のボリュームタイプと同様、FlexVolumeボリュームは、Podによって使用されている最中でも拡張できます。 + +{{< note >}} +FlexVolumeのリサイズは、基盤となるドライバーがリサイズをサポートしている場合のみ可能です。 +{{< /note >}} + +{{< note >}} +EBSの拡張は時間がかかる操作です。また変更は、ボリュームごとに6時間に1回までというクォータもあります。 +{{< /note >}} + + +## 永続ボリュームの種類 + +`PersistentVolume`の種類はプラグインとして実装されます。Kubernetesは現在次のプラグインに対応しています。 + +* GCEPersistentDisk +* AWSElasticBlockStore +* AzureFile +* AzureDisk +* CSI +* FC (Fibre Channel) +* FlexVolume +* Flocker +* NFS +* iSCSI +* RBD (Ceph Block Device) +* CephFS +* Cinder (OpenStack block storage) +* Glusterfs +* VsphereVolume +* Quobyte Volumes +* HostPath (テスト用の単一ノードのみ。ローカルストレージはどのような方法でもサポートされておらず、またマルチノードクラスターでは動作しません) +* Portworx Volumes +* ScaleIO Volumes +* StorageOS + +## 永続ボリューム + +各PVには、仕様とボリュームのステータスが含まれているspecとstatusが含まれています。 + +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: pv0003 +spec: + capacity: + storage: 5Gi + volumeMode: Filesystem + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Recycle + storageClassName: slow + mountOptions: + - hard + - nfsvers=4.1 + nfs: + path: /tmp + server: 172.17.0.2 +``` + +### 容量 + +通常、PVには特定のストレージ容量があります。これはPVの`capacity`属性を使用して設定されます。容量によって期待される単位を理解するためには、Kubernetesの[リソースモデル](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md)を参照してください。 + +現在、設定または要求できるのはストレージサイズのみです。将来の属性には、IOPS、スループットなどが含まれます。 + +### ボリュームモード + +{{< feature-state for_k8s_version="v1.13" state="beta" >}} + +Kubernetes 1.9より前は、すべてのボリュームプラグインが永続ボリュームにファイルシステムを作成していました。現在はRawブロックデバイスを使うために`volumeMode`の値を`block`に設定するか、ファイルシステムを使うために`filesystem`を設定できます。値が省略された場合のデフォルトは`filesystem`です。これはオプションのAPIパラメーターです。 + +### アクセスモード + +`PersistentVolume`は、リソースプロバイダーがサポートする方法でホストにマウントできます。次の表に示すように、プロバイダーにはさまざまな機能があり、各PVのアクセスモードは、その特定のボリュームでサポートされる特定のモードに設定されます。たとえば、NFSは複数の読み取り/書き込みクライアントをサポートできますが、特定のNFSのPVはサーバー上で読み取り専用としてエクスポートされる場合があります。各PVは、その特定のPVの機能を記述する独自のアクセスモードのセットを取得します。 + +アクセスモードは次の通りです。 + +* ReadWriteOnce –ボリュームは単一のNodeで読み取り/書き込みとしてマウントできます +* ReadOnlyMany –ボリュームは多数のNodeで読み取り専用としてマウントできます +* ReadWriteMany –ボリュームは多数のNodeで読み取り/書き込みとしてマウントできます + +CLIではアクセスモードは次のように略されます。 + +* RWO - ReadWriteOnce +* ROX - ReadOnlyMany +* RWX - ReadWriteMany + + +> __Important!__ ボリュームは、多数のモードをサポートしていても、一度に1つのアクセスモードでしかマウントできません。たとえば、GCEPersistentDiskは、単一NodeではReadWriteOnceとして、または多数のNodeではReadOnlyManyとしてマウントできますが、同時にマウントすることはできません。 + + +| ボリュームプラグイン | ReadWriteOnce | ReadOnlyMany | ReadWriteMany | +| :--- | :---: | :---: | :---: | +| AWSElasticBlockStore | ✓ | - | - | +| AzureFile | ✓ | ✓ | ✓ | +| AzureDisk | ✓ | - | - | +| CephFS | ✓ | ✓ | ✓ | +| Cinder | ✓ | - | - | +| CSI | ドライバーに依存 | ドライバーに依存 | ドライバーに依存 | +| FC | ✓ | ✓ | - | +| FlexVolume | ✓ | ✓ | ドライバーに依存 | +| Flocker | ✓ | - | - | +| GCEPersistentDisk | ✓ | ✓ | - | +| Glusterfs | ✓ | ✓ | ✓ | +| HostPath | ✓ | - | - | +| iSCSI | ✓ | ✓ | - | +| Quobyte | ✓ | ✓ | ✓ | +| NFS | ✓ | ✓ | ✓ | +| RBD | ✓ | ✓ | - | +| VsphereVolume | ✓ | - | - (Podが連結されている場合のみ) | +| PortworxVolume | ✓ | - | ✓ | +| ScaleIO | ✓ | ✓ | - | +| StorageOS | ✓ | - | - | + +### Class + 
+PVはクラスを持つことができます。これは`storageClassName`属性を[ストレージクラス](/docs/concepts/storage/storage-classes/)の名前に設定することで指定されます。特定のクラスのPVは、そのクラスを要求するPVCにのみバインドできます。`storageClassName`にクラスがないPVは、特定のクラスを要求しないPVCにのみバインドできます。 + +以前、`volume.beta.kubernetes.io/storage-class`アノテーションは、`storageClassName`属性の代わりに使用されていました。このアノテーションはまだ機能しています。ただし、将来のKubernetesリリースでは完全に非推奨になります。 + +### 再クレームポリシー {#reclaim-policy} + +現在の再クレームポリシーは次のとおりです。 + +* 保持 -- 手動再クレーム +* リサイクル -- 基本的な削除 (`rm -rf /thevolume/*`) +* 削除 -- AWS EBS、GCE PD、Azure Disk、もしくはOpenStack Cinderボリュームに関連するストレージアセットを削除 + +現在、NFSとHostPathのみがリサイクルをサポートしています。AWS EBS、GCE PD、Azure Disk、およびCinder volumeは削除をサポートしています。 + +### マウントオプション + +Kubernetes管理者は永続ボリュームがNodeにマウントされるときの追加マウントオプションを指定できます。 + +{{< note >}} +すべての永続ボリュームタイプがすべてのマウントオプションをサポートするわけではありません。 +{{< /note >}} + +次のボリュームタイプがマウントオプションをサポートしています。 + +* AWSElasticBlockStore +* AzureDisk +* AzureFile +* CephFS +* Cinder (OpenStackブロックストレージ) +* GCEPersistentDisk +* Glusterfs +* NFS +* Quobyte Volumes +* RBD (Ceph Block Device) +* StorageOS +* VsphereVolume +* iSCSI + +マウントオプションは検証されないため、不正な場合、マウントは失敗します。 + +以前、`volume.beta.kubernetes.io/mount-options`アノテーションが`mountOptions`属性の代わりに使われていました。このアノテーションはまだ機能しています。ただし、将来のKubernetesリリースでは完全に非推奨になります。 + +### ノードアフィニティ + +{{< note >}} +ほとんどのボリュームタイプはこのフィールドを設定する必要がありません。[AWS EBS](/docs/concepts/storage/volumes/#awselasticblockstore)、[GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk)、もしくは[Azure Disk](/docs/concepts/storage/volumes/#azuredisk)ボリュームブロックタイプの場合、自動的に設定されます。[local](/docs/concepts/storage/volumes/#local)ボリュームは明示的に設定する必要があります。 +{{< /note >}} + +PVは[ノードアフィニティ](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volumenodeaffinity-v1-core)を指定して、このボリュームにアクセスできるNodeを制限する制約を定義できます。PVを使用するPodは、ノードアフィニティによって選択されたNodeにのみスケジュールされます。 + +### フェーズ + +ボリュームは次のフェーズのいずれかです。 + +* 利用可能 -- まだクレームに紐付いていない自由なリソース +* バウンド -- クレームに紐付いている +* リリース済み -- クレームが削除されたが、クラスターにまだクレームされている +* 失敗 -- 自動再クレームに失敗 + +CLIにはPVに紐付いているPVCの名前が表示されます。 + +## 永続ボリューム要求 + +各PVCには、specとstatusが含まれます。これらはそれぞれ、クレームの仕様とステータスです。 + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: myclaim +spec: + accessModes: + - ReadWriteOnce + volumeMode: Filesystem + resources: + requests: + storage: 8Gi + storageClassName: slow + selector: + matchLabels: + release: "stable" + matchExpressions: + - {key: environment, operator: In, values: [dev]} +``` + +### アクセスモード + +クレームは、特定のアクセスモードでストレージを要求するときにボリュームと同じ規則を使用します。 + +### ボリュームモード + +クレームは、ボリュームと同じ規則を使用して、ファイルシステムまたはブロックデバイスとしてのボリュームの消費を示します。 + +### リソース + +Podと同様に、クレームは特定の量のリソースを要求できます。この場合、要求はストレージ用です。同じ[リソースモデル](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md)がボリュームとクレームの両方に適用されます。 + +### セレクター + +クレームでは、[ラベルセレクター](/docs/concepts/overview/working-with-objects/labels/#label-selectors)を指定して、ボリュームセットをさらにフィルター処理できます。ラベルがセレクターに一致するボリュームのみがクレームにバインドできます。セレクターは2つのフィールドで構成できます。 + +* `matchLabels` - ボリュームはこの値のラベルが必要です +* `matchExpressions` - キー、値のリスト、およびキーと値を関連付ける演算子を指定することによって作成された要件のリスト。有効な演算子は、In、NotIn、ExistsおよびDoesNotExistです。 + +`matchLabels`と`matchExpressions`の両方からのすべての要件はANDで結合されます。一致するには、すべてが一致する必要があります。 + +### クラス + +クレームは、`storageClassName`属性を使用して[ストレージクラス](/docs/concepts/storage/storage-classes/)の名前を指定することにより、特定のクラスを要求できます。PVCにバインドできるのは、PVCと同じ`storageClassName`を持つ、要求されたクラスのPVのみです。
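たとえば、次の最小限のスケッチでは、`storageClassName: slow`を要求するPVCは、同じクラスを持つPVとのみバインドされ得ます(名前、サイズ、パスは説明用の仮の値です):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-slow-example        # 説明用の仮の名前
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: slow       # このクラスを要求するPVCとのみバインドされ得る
  hostPath:                    # 単一ノードのテスト用途のみ
    path: /tmp/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-slow-example
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: slow       # 上のPVのクラスと一致
  resources:
    requests:
      storage: 5Gi
```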
+PVCは必ずしもクラスをリクエストする必要はありません。`storageClassName`が`""`に設定されているPVCは、クラスのないPVを要求していると常に解釈されるため、クラスのないPVにのみバインドできます(アノテーションがないか、`""`に等しい1つのセット)。`storageClassName`のないPVCはまったく同じではなく、[`DefaultStorageClass`アドミッションプラグイン](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)がオンになっているかどうかによって、クラスターによって異なる方法で処理されます。 + +* アドミッションプラグインがオンになっている場合、管理者はデフォルトの`StorageClass`を指定できます。`storageClassName`を持たないすべてのPVCは、そのデフォルトのPVにのみバインドできます。デフォルトの`StorageClass`の指定は、`StorageClass`オブジェクトで`storageclass.kubernetes.io/is-default-class`アノテーションを`true`に設定することにより行われます。管理者がデフォルトを指定しない場合、クラスターは、アドミッションプラグインがオフになっているかのようにPVC作成をレスポンスします。複数のデフォルトが指定されている場合、アドミッションプラグインはすべてのPVCの作成を禁止します。 +* アドミッションプラグインがオフになっている場合、デフォルトの`StorageClass`の概念はありません。`storageClassName`を持たないすべてのPVCは、クラスを持たないPVにのみバインドできます。この場合、storageClassNameを持たないPVCは、`storageClassName`が`""`に設定されているPVCと同じように扱われます。 + +インストール方法によっては、インストール時にアドオンマネージャーによってデフォルトのストレージクラスがKubernetesクラスターにデプロイされる場合があります。 + +PVCが`selector`を要求することに加えて`StorageClass`を指定する場合、要件はANDで一緒に結合されます。要求されたクラスのPVと要求されたラベルのみがPVCにバインドされます。 + +{{< note >}} +現在、`selector`が空ではないPVCは、PVを動的にプロビジョニングできません。 +{{< /note >}} + +以前は、`storageClassName`属性の代わりに`volume.beta.kubernetes.io/storage-class`アノテーションが使用されていました。このアノテーションはまだ機能しています。ただし、今後のKubernetesリリースではサポートされません。 + +## ボリュームとしてのクレーム + +Podは、クレームをボリュームとして使用してストレージにアクセスします。クレームは、そのクレームを使用するPodと同じ名前空間に存在する必要があります。クラスターは、Podの名前空間でクレームを見つけ、それを使用してクレームを支援している`PersistentVolume`を取得します。次に、ボリュームがホストとPodにマウントされます。 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: myfrontend + image: nginx + volumeMounts: + - mountPath: "/var/www/html" + name: mypd + volumes: + - name: mypd + persistentVolumeClaim: + claimName: myclaim +``` + +### 名前空間に関する注意 + +`PersistentVolume`バインドは排他的であり、`PersistentVolumeClaim`は名前空間オブジェクトであるため、"多"モード(`ROX`、`RWX`)でクレームをマウントすることは1つの名前空間内でのみ可能です。 + +## Rawブロックボリュームのサポート + +{{< feature-state for_k8s_version="v1.13" state="beta" >}} + +次のボリュームプラグインは、必要に応じて動的プロビジョニングを含むrawブロックボリュームをサポートします。 + +* AWSElasticBlockStore +* AzureDisk +* FC (Fibre Channel) +* GCEPersistentDisk +* iSCSI +* Local volume +* RBD (Ceph Block Device) +* VsphereVolume (alpha) + +{{< note >}} +Kubernetes 1.9では、FCおよびiSCSIボリュームのみがrawブロックボリュームをサポートしていました。 +追加のプラグインのサポートは1.10で追加されました。 +{{< /note >}} + +### Rawブロックボリュームを使用した永続ボリューム + +```yaml +apiVersion: v1 +kind: PersistentVolume +metadata: + name: block-pv +spec: + capacity: + storage: 10Gi + accessModes: + - ReadWriteOnce + volumeMode: Block + persistentVolumeReclaimPolicy: Retain + fc: + targetWWNs: ["50060e801049cfd1"] + lun: 0 + readOnly: false +``` + +### Rawブロックボリュームを要求する永続ボリュームクレーム + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: block-pvc +spec: + accessModes: + - ReadWriteOnce + volumeMode: Block + resources: + requests: + storage: 10Gi +``` + +### コンテナにRawブロックデバイスパスを追加するPod仕様 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: pod-with-block-volume +spec: + containers: + - name: fc-container + image: fedora:26 + command: ["/bin/sh", "-c"] + args: [ "tail -f /dev/null" ] + volumeDevices: + - name: data + devicePath: /dev/xvda + volumes: + - name: data + persistentVolumeClaim: + claimName: block-pvc +``` + +{{< note >}} +Podにrawブロックデバイスを追加する場合は、マウントパスの代わりにコンテナーでデバイスパスを指定します。 +{{< /note >}} + +### ブロックボリュームのバインド + 
+ユーザーが`PersistentVolumeClaim`specの`volumeMode`フィールドを使用してrawブロックボリュームの要求を示す場合、バインディングルールは、このモードをspecの一部として考慮しなかった以前のリリースとわずかに異なります。下記の表は、ユーザーと管理者がrawブロックデバイスを要求するために指定できる組み合わせの一覧で、組み合わせごとにボリュームがバインドされるかどうかを示しています。以下は、静的にプロビジョニングされたボリュームのボリュームバインディングマトリクスです。
+
+| PVボリュームモード | PVCボリュームモード | 結果 |
+| -------------------|:-------------------:| ------------:|
+| 未定義 | 未定義 | バインド |
+| 未定義 | ブロック | バインドなし |
+| 未定義 | ファイルシステム | バインド |
+| ブロック | 未定義 | バインドなし |
+| ブロック | ブロック | バインド |
+| ブロック | ファイルシステム | バインドなし |
+| ファイルシステム | ファイルシステム | バインド |
+| ファイルシステム | ブロック | バインドなし |
+| ファイルシステム | 未定義 | バインド |
+
+{{< note >}}
+アルファリリースでは、静的にプロビジョニングされたボリュームのみがサポートされます。管理者は、rawブロックデバイスを使用する場合、これらの値を考慮するように注意する必要があります。
+{{< /note >}}
+
+## ボリュームのスナップショットとスナップショットからのボリュームの復元のサポート
+
+{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
+
+ボリュームスナップショット機能は、CSIボリュームプラグインのみをサポートするために追加されました。詳細については、[ボリュームのスナップショット](/docs/concepts/storage/volume-snapshots/)を参照してください。
+
+ボリュームスナップショットのデータソースからボリュームを復元する機能を有効にするには、apiserverおよびcontroller-managerで`VolumeSnapshotDataSource`フィーチャーゲートを有効にします。
+
+### ボリュームスナップショットから永続ボリュームクレームを作成する
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: restore-pvc
+spec:
+  storageClassName: csi-hostpath-sc
+  dataSource:
+    name: new-snapshot-test
+    kind: VolumeSnapshot
+    apiGroup: snapshot.storage.k8s.io
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 10Gi
+```
+
+## ボリュームの複製
+
+{{< feature-state for_k8s_version="v1.16" state="beta" >}}
+
+ボリュームの複製機能は、CSIボリュームプラグインのみをサポートするために追加されました。詳細については、[ボリュームの複製](/docs/concepts/storage/volume-pvc-datasource/)を参照してください。
+
+PVCデータソースからのボリューム複製機能を有効にするには、apiserverおよびcontroller-managerで`VolumePVCDataSource`フィーチャーゲートを有効にします。
+
+### 既存のPVCからの永続ボリュームクレーム作成
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: cloned-pvc
+spec:
+  storageClassName: my-csi-plugin
+  dataSource:
+    name: existing-src-pvc-name
+    kind: PersistentVolumeClaim
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 10Gi
+```
+
+## 可搬性の高い設定の作成
+
+もし幅広いクラスターで実行され、永続ボリュームが必要となる構成テンプレートやサンプルを作成している場合は、次のパターンを使用することをお勧めします。
+
+- 構成にPersistentVolumeClaimオブジェクトを含める(DeploymentやConfigMapと共に)
+- ユーザーが設定をインスタンス化する際にPersistentVolumeを作成する権限がない場合があるため、設定にPersistentVolumeオブジェクトを含めない。
+- テンプレートをインスタンス化する時にストレージクラス名を指定する選択肢をユーザーに与える
+  - ユーザーがストレージクラス名を指定する場合、`persistentVolumeClaim.storageClassName`フィールドにその値を入力する。これにより、クラスターが管理者によって有効にされたストレージクラスを持っている場合、PVCは正しいストレージクラスと一致する。
+  - ユーザーがストレージクラス名を指定しない場合、`persistentVolumeClaim.storageClassName`フィールドはnilのままにする。これにより、PVはユーザーにクラスターのデフォルトストレージクラスで自動的にプロビジョニングされる。多くのクラスター環境ではデフォルトのストレージクラスがインストールされているが、管理者は独自のデフォルトストレージクラスを作成することができる。
+- ツールでPVCを監視し、しばらくしてもバインドされない場合は、それをユーザーに表示する。これはクラスターが動的ストレージをサポートしない(この場合ユーザーは対応するPVを作成するべき)、もしくはクラスターがストレージシステムを持っていない(この場合ユーザーはPVCを必要とする設定をデプロイできない)可能性があることを示す。
+
+{{% /capture %}}
diff --git a/content/ja/docs/concepts/workloads/controllers/daemonset.md b/content/ja/docs/concepts/workloads/controllers/daemonset.md
index 4f10338dacefd..1edf7636ce843 100644
--- a/content/ja/docs/concepts/workloads/controllers/daemonset.md
+++ b/content/ja/docs/concepts/workloads/controllers/daemonset.md
@@ -13,8 +13,7 @@ DaemonSetのいくつかの典型的な使用例は以下の通りです。

- `glusterd`や`ceph`のようなクラスターのストレージデーモンを各Node上で稼働させる。
- `fluentd`や`logstash`のようなログ集計デーモンを各Node上で稼働させる。
-- [Prometheus Node Exporter](
-  https://github.com/prometheus/node_exporter)や`collectd`、[Dynatrace OneAgent](https://www.dynatrace.com/technologies/kubernetes-monitoring/)、 [AppDynamics
Agent](https://docs.appdynamics.com/display/CLOUD/Container+Visibility+with+Kubernetes)、 [Datadog agent](https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/)、 [New Relic agent](https://docs.newrelic.com/docs/integrations/kubernetes-integration/installation/kubernetes-installation-configuration)、Gangliaの`gmond`やInstana agentなどのようなNodeのモニタリングデーモンを各Node上で稼働させる。
+- [Prometheus Node Exporter](https://github.com/prometheus/node_exporter)や[Flowmill](https://github.com/Flowmill/flowmill-k8s/)、[Sysdig Agent](https://docs.sysdig.com)、`collectd`、[Dynatrace OneAgent](https://www.dynatrace.com/technologies/kubernetes-monitoring/)、 [AppDynamics Agent](https://docs.appdynamics.com/display/CLOUD/Container+Visibility+with+Kubernetes)、 [Datadog agent](https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/)、 [New Relic agent](https://docs.newrelic.com/docs/integrations/kubernetes-integration/installation/kubernetes-installation-configuration)、Gangliaの`gmond`やInstana agentなどのようなNodeのモニタリングデーモンを各Node上で稼働させる。

シンプルなケースとして、各タイプのデーモンにおいて、全てのNodeをカバーする1つのDaemonSetが使用されるケースがあります。
さらに複雑な設定では、単一のタイプのデーモン用ですが、異なるフラグや、異なるハードウェアタイプに対するメモリー、CPUリクエストを要求する複数のDaemonSetを使用するケースもあります。
@@ -41,19 +40,19 @@ kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml

### 必須のフィールド

他の全てのKubernetesの設定と同様に、DaemonSetは`apiVersion`、`kind`と`metadata`フィールドが必須となります。
-設定ファイルの活用法に関する一般的な情報は、[アプリケーションのデプロイ](/docs/user-guide/deploying-applications/)、[コンテナの設定](/docs/tasks/)、[kuberctlを用いたオブジェクトの管理](/docs/concepts/overview/object-management-kubectl/overview/)といったドキュメントを参照ください。
+設定ファイルの活用法に関する一般的な情報は、[アプリケーションのデプロイ](/ja/docs/tasks/run-application/run-stateless-application-deployment/)、[コンテナの設定](/ja/docs/tasks/)、[kubectlを用いたオブジェクトの管理](/ja/docs/concepts/overview/working-with-objects/object-management/)といったドキュメントを参照ください。

-また、DaemonSetにおいて[`.spec`](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status)セクションも必須となります。
+また、DaemonSetにおいて[`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)セクションも必須となります。

### Podテンプレート

`.spec.template`は`.spec`内での必須のフィールドの1つです。

-`.spec.template`は[Podテンプレート](/docs/concepts/workloads/pods/pod-overview/#pod-templates)となります。これはフィールドがネストされていて、`apiVersion`や`kind`をもたないことを除いては、[Pod](/docs/concepts/workloads/pods/pod/)のテンプレートと同じスキーマとなります。
+`.spec.template`は[Podテンプレート](/ja/docs/concepts/workloads/pods/pod-overview/#podテンプレート)となります。これはフィールドがネストされていて、`apiVersion`や`kind`をもたないことを除いては、[Pod](/ja/docs/concepts/workloads/pods/pod/)のテンプレートと同じスキーマとなります。

Podに対する必須のフィールドに加えて、DaemonSet内のPodテンプレートは適切なラベルを指定しなくてはなりません([Podセレクター](#pod-selector)の項目を参照ください)。

-DaemonSet内のPodテンプレートでは、[`RestartPolicy`](/docs/user-guide/pod-states)フィールドを指定せずにデフォルトの`Always`を使用するか、明示的に`Always`を設定するかのどちらかである必要があります。
+DaemonSet内のPodテンプレートでは、[`RestartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)フィールドを指定せずにデフォルトの`Always`を使用するか、明示的に`Always`を設定するかのどちらかである必要があります。

### Podセレクター

@@ -70,30 +69,20 @@ Kubernetes1.8のように、ユーザーは`.spec.template`のラベルにマッ

もし`spec.selector`が指定されたとき、`.spec.template.metadata.labels`とマッチしなければなりません。この2つの値がマッチしない設定をした場合、APIによってリジェクトされます。

-また、ユーザーは通常、DaemonSetやReplicaSetのような他のコントローラーを使用するか直接かによらず、このセレクターとマッチするラベルを持つPodを作成すべきではありません。
-さもないと、DaemonSetコントローラーが、それらのPodがDaemonSetによって作成されたものと扱われてしまいます。Kubernetesはユーザーがこれを行うことを止めることはありません。ユーザーがこれを行いたい1つのケースとしては、テストのためにNode上に異なる値をもったPodを手動で作成するような場合があります。
+また、ユーザーは通常、別のDaemonSetやReplicaSetなどのワークロードリソースを介する場合であっても直接であっても、このセレクターとマッチするラベルを持つPodを作成すべきではありません。さもないと、DaemonSet
{{< glossary_tooltip text="コントローラー" term_id="controller" >}}は、それらのPodがDaemonSetによって作成されたものとみなすためです。Kubernetesはこれを行うことを止めません。ユーザーがこれを行いたい1つのケースとしては、テスト用にノード上に異なる値を持つPodを手動で作成するような場合があります。

### 特定のいくつかのNode上のみにPodを稼働させる

もしユーザーが`.spec.template.spec.nodeSelector`を指定したとき、DaemonSetコントローラーは、その[node
-selector](/docs/concepts/configuration/assign-pod-node/)にマッチするPodをNode上に作成します。
-同様に、もし`.spec.template.spec.affinity`を指定したとき、DaemonSetコントローラーは[node affinity](/docs/concepts/configuration/assign-pod-node/)マッチするPodをNode上に作成します。
+selector](/ja/docs/concepts/configuration/assign-pod-node/)にマッチするPodをNode上に作成します。
+同様に、もし`.spec.template.spec.affinity`を指定したとき、DaemonSetコントローラーは[node affinity](/ja/docs/concepts/configuration/assign-pod-node/)にマッチするPodをNode上に作成します。

もしユーザーがどちらも指定しないとき、DaemonSetコントローラーは全てのNode上にPodを作成します。

## Daemon Podがどのようにスケジューリングされるか

-### DaemonSetコントローラーによってスケジューリングされる場合(Kubernetes1.12からデフォルトで無効)
-
-通常、Podが稼働するマシンはKubernetesスケジューラーによって選択されます。
-しかし、DaemonSetコントローラーによって作成されたPodは既に選択されたマシンを持っています(`.spec.nodeName`はPodの作成時に指定され、Kubernetesスケジューラーによって無視されます)。
-従って:
-
- - Nodeの[`unschedulable`](/docs/admin/node/#manual-node-administration)フィールドはDaemonSetコントローラーによって尊重されません。
- - DaemonSetコントローラーは、スケジューラーが起動していないときでも稼働でき、これはクラスターの自力での起動を助けます。
-
### デフォルトスケジューラーによってスケジューリングされる場合(Kubernetes1.12からデフォルトで有効)

-{{< feature-state state="beta" for-kubernetes-version="1.12" >}}
+{{< feature-state state="stable" for-kubernetes-version="1.17" >}}

DaemonSetは全ての利用可能なNodeが単一のPodのコピーを稼働させることを保証します。通常、Podが稼働するNodeはKubernetesスケジューラーによって選択されます。しかし、DaemonSetのPodは代わりにDaemonSetコントローラーによって作成され、スケジューリングされます。
下記の問題について説明します:
@@ -116,7 +105,7 @@ nodeAffinity:

さらに、`node.kubernetes.io/unschedulable:NoSchedule`というtolarationがDaemonSetのPodに自動的に追加されます。デフォルトスケジューラーは、DaemonSetのPodのスケジューリングのときに、`unschedulable`なNodeを無視します。

-### TaintsとTolerations
+### TaintsとTolerations

DaemonSetのPodは[TaintsとTolerations](/docs/concepts/configuration/taint-and-toleration)の設定を尊重します。
下記のTolerationsは、関連する機能によって自動的にDaemonSetのPodに追加されます。
@@ -136,7 +125,7 @@ DaemonSet内のPodとのコミュニケーションをする際に考えられ

- **Push**: DaemonSet内のPodは他のサービスに対して更新情報を送信するように設定されます。
- **NodeIPとKnown Port**: PodがNodeIPを介して疎通できるようにするため、DaemonSet内のPodは`hostPort`を使用できます。慣例により、クライアントはNodeIPのリストとポートを知っています。
-- **DNS**: 同じPodセレクターを持つ[HeadlessService](/docs/concepts/services-networking/service/#headless-services)を作成し、`endpoints`リソースを使ってDaemonSetを探すか、DNSから複数のAレコードを取得します。
+- **DNS**: 同じPodセレクターを持つ[HeadlessService](/ja/docs/concepts/services-networking/service/#headless-service)を作成し、`endpoints`リソースを使ってDaemonSetを探すか、DNSから複数のAレコードを取得します。
- **Service**: 同じPodセレクターを持つServiceを作成し、複数のうちのいずれかのNode上のDaemonに疎通させるためにそのServiceを使います。

## DaemonSetの更新

@@ -145,11 +134,9 @@ DaemonSet内のPodとのコミュニケーションをする際に考えられ

ユーザーはDaemonSetが作成したPodを修正可能です。しかし、Podは全てのフィールドの更新を許可していません。また、DaemonSetコントローラーは次のNode(同じ名前でも)が作成されたときにオリジナルのテンプレートを使ってPodを作成します。

-ユーザーはDaemonSetを削除可能です。もし`kubectl`コマンドで`--cascade=false`を指定したとき、DaemonSetのPodはNode上で残り続けます。そしてユーザーは異なるテンプレートを使って新しいDaemonSetを作成可能です。
-異なるテンプレートを使った新しいDaemonSetは、マッチしたラベルを持っている全ての存在しているPodを認識します。DaemonSetはPodのテンプレートがミスマッチしていたとしても、それらのPodを修正もしくは削除をしません。
-ユーザーはPodもしくはNodeの削除によって新しいPodの作成を強制する必要があります。
+ユーザーはDaemonSetを削除可能です。`kubectl`コマンドで`--cascade=false`を指定するとDaemonSetのPodはNode上に残り続けます。その後、同じセレクターで新しいDaemonSetを作成すると、新しいDaemonSetは既存のPodを再利用します。Podを置き換える必要がある場合、DaemonSetは`updateStrategy`に従ってそれらを置き換えます。

-Kubernetes1.6とそれ以降のバージョンでは、ユーザーはDaemonSet上で[ローリングアップデートの実施](/docs/tasks/manage-daemon/update-daemon-set/)が可能です。
+ユーザーはDaemonSet上で[ローリングアップデートの実施](/docs/tasks/manage-daemon/update-daemon-set/)が可能です。
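+`updateStrategy`は、例えば次のように指定できます。これは説明用の最小限の例で、名前やイメージは仮の値です。
+
+```yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: example-daemonset
+spec:
+  selector:
+    matchLabels:
+      name: example-daemonset
+  updateStrategy:
+    # RollingUpdate(デフォルト)またはOnDeleteを指定できます
+    type: RollingUpdate
+    rollingUpdate:
+      # 更新中に同時に利用不可となるPod数の上限
+      maxUnavailable: 1
+  template:
+    metadata:
+      labels:
+        name: example-daemonset
+    spec:
+      containers:
+      - name: example
+        image: nginx:1.16.1
+```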
## DaemonSetの代替案

@@ -172,7 +159,7 @@ DaemonSetと違い、静的Podはkubectlや他のKubernetes APIクライアン

### Deployment

-DaemonSetは、Podの作成し、そのPodが停止されることのないプロセスを持つことにおいて[Deployment](/docs/concepts/workloads/controllers/deployment/)と同様です(例: webサーバー、ストレージサーバー)。
+DaemonSetは、Podを作成し、そのPodが停止されることのないプロセスを持つ点において[Deployment](/ja/docs/concepts/workloads/controllers/deployment/)と同様です(例: webサーバー、ストレージサーバー)。

フロントエンドのようなServiceのように、どのホスト上にPodが稼働するか制御するよりも、レプリカ数をスケールアップまたはスケールダウンしたりローリングアップデートする方が重要であるような、状態をもたないServiceに対してDeploymentを使ってください。
Podのコピーが全てまたは特定のホスト上で常に稼働していることが重要な場合や、他のPodの前に起動させる必要があるときにDaemonSetを使ってください。
diff --git a/content/ja/docs/concepts/workloads/controllers/deployment.md b/content/ja/docs/concepts/workloads/controllers/deployment.md
index 1d465cb70dafb..3146606573960 100644
--- a/content/ja/docs/concepts/workloads/controllers/deployment.md
+++ b/content/ja/docs/concepts/workloads/controllers/deployment.md
@@ -11,7 +11,7 @@ weight: 30

{{% capture overview %}}

-_Deployment_ コントローラーは[Pod](/docs/concepts/workloads/pods/pod/)と[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)の宣言的なアップデート機能を提供します。
+_Deployment_ コントローラーは[Pod](/ja/docs/concepts/workloads/pods/pod/)と[ReplicaSet](/ja/docs/concepts/workloads/controllers/replicaset/)の宣言的なアップデート機能を提供します。

ユーザーはDeploymentにおいて_理想的な状態_ を定義し、Deploymentコントローラーは指定された頻度で現在の状態を理想的な状態に変更させます。ユーザーはDeploymentを定義して、新しいReplicaSetを作成したり、既存のDeploymentを削除して新しいDeploymentで全てのリソースを適用できます。

@@ -28,7 +28,7 @@ Deploymentによって作成されたReplicaSetを管理しないでください

下記の項目はDeploymentの典型的なユースケースです。

-* ReplicaSetをロールアウトするために[Deploymentの作成](#creating-a-deployment)を行う: ReplicaSetはバックグラウンドでPodを作成します。Podの作成が完了したかどうかは、ロールアウトのステータスを確認してください。 
+* ReplicaSetをロールアウトするために[Deploymentの作成](#creating-a-deployment)を行う: ReplicaSetはバックグラウンドでPodを作成します。Podの作成が完了したかどうかは、ロールアウトのステータスを確認してください。
* DeploymentのPodTemplateSpecを更新することにより[Podの新しい状態を宣言する](#updating-a-deployment): 新しいReplicaSetが作成され、Deploymentは指定された頻度で古いReplicaSetから新しいReplicaSetへのPodの移行を管理します。新しいReplicaSetはDeploymentのリビジョンを更新します。
* Deploymentの現在の状態が不安定な場合、[Deploymentのロールバック](#rolling-back-a-deployment)をする: ロールバックによる各更新作業は、Deploymentのリビジョンを更新します。
* より多くの負荷をさばけるように、[Deploymentをスケールアップ](#scaling-a-deployment)する
@@ -52,7 +52,7 @@ Deploymentによって作成されたReplicaSetを管理しないでください
{{< /note >}}
* `template`フィールドは、下記のサブフィールドを持ちます。:
  * Podは`labels`フィールドによって指定された`app: nginx`というラベルがつけられる
-  * PodTemplateの仕様もしくは、`.template.spec`フィールドは、このPodは`nginx`という名前のコンテナーを1つ稼働させ、それは`nginx`というさせ、[Docker Hub](https://hub.docker.com/)にある`nginx`のバージョン1.7.9を使うことを示します
+  * PodTemplateの仕様もしくは、`.template.spec`フィールドは、このPodは`nginx`という名前のコンテナーを1つ稼働させ、それは[Docker Hub](https://hub.docker.com/)にある`nginx`イメージのバージョン1.14.2を使用することを示します
  * 1つのコンテナを作成し、`name`フィールドを使って`nginx`という名前をつけます

上記のDeploymentを作成するために、以下に示すステップにしたがってください。
@@ -136,10 +136,10 @@ Deploymentのロールアウトは、DeploymentのPodテンプレート(この

Deploymentを更新するには下記のステップに従ってください。

-1. nginxのPodで、`nginx:1.7.9`イメージの代わりに`nginx:1.9.1`を使うように更新します。
+1. 
nginxのPodで、`nginx:1.14.2`イメージの代わりに`nginx:1.16.1`を使うように更新します。 ```shell - kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 + kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 ``` 実行結果は下記のとおりです。 @@ -147,7 +147,7 @@ Deploymentを更新するには下記のステップに従ってください。 deployment.apps/nginx-deployment image updated ``` - また、Deploymentを`編集`して、`.spec.template.spec.containers[0].image`を`nginx:1.7.9`から`nginx:1.9.1`に変更することができます。 + また、Deploymentを`編集`して、`.spec.template.spec.containers[0].image`を`nginx:1.14.2`から`nginx:1.16.1`に変更することができます。 ```shell kubectl edit deployment.v1.apps/nginx-deployment @@ -237,7 +237,7 @@ Deploymentを更新するには下記のステップに従ってください。 Labels: app=nginx Containers: nginx: - Image: nginx:1.9.1 + Image: nginx:1.16.1 Port: 80/TCP Environment: Mounts: @@ -268,7 +268,7 @@ Deploymentコントローラーにより、新しいDeploymentが観測される Deploymentのロールアウトが進行中にDeploymentを更新すると、Deploymentは更新する毎に新しいReplicaSetを作成してスケールアップさせ、以前にスケールアップしたReplicaSetのロールオーバーを行います。Deploymentは更新前のReplicaSetを古いReplicaSetのリストに追加し、スケールダウンを開始します。 -例えば、5つのレプリカを持つ`nginx:1.7.9`のDeploymentを作成し、`nginx:1.7.9`の3つのレプリカが作成されているときに5つのレプリカを持つ`nginx:1.9.1`に更新します。このケースではDeploymentは作成済みの`nginx:1.7.9`の3つのPodをすぐに削除し、`nginx:1.9.1`のPodの作成を開始します。`nginx:1.7.9`の5つのレプリカを全て作成するのを待つことはありません。 +例えば、5つのレプリカを持つ`nginx:1.14.2`のDeploymentを作成し、`nginx:1.14.2`の3つのレプリカが作成されているときに5つのレプリカを持つ`nginx:1.16.1`に更新します。このケースではDeploymentは作成済みの`nginx:1.14.2`の3つのPodをすぐに削除し、`nginx:1.16.1`のPodの作成を開始します。`nginx:1.14.2`の5つのレプリカを全て作成するのを待つことはありません。 ### ラベルセレクターの更新 @@ -290,10 +290,10 @@ Deploymentのロールバックを行いたい場合があります。例えば Deploymentのリビジョンは、Deploymentのロールアウトがトリガーされた時に作成されます。これはDeploymentのPodテンプレート(`.spec.template`)が変更されたときのみ新しいリビジョンが作成されることを意味します。Deploymentのスケーリングなど、他の種類の更新においてはDeploymentのリビジョンは作成されません。これは手動もしくはオートスケーリングを同時に行うことができるようにするためです。これは過去のリビジョンにロールバックするとき、DeploymentのPodテンプレートの箇所のみロールバックされることを意味します。 {{< /note >}} -* `nginx:1.9.1`の代わりに`nginx:1.91`というイメージに更新して、Deploymentの更新中にタイプミスをしたと仮定します。 +* `nginx:1.16.1`の代わりに`nginx:1.161`というイメージに更新して、Deploymentの更新中にタイプミスをしたと仮定します。 ```shell - kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.161 --record=true ``` 実行結果は下記のとおりです。 @@ -367,7 +367,7 @@ Deploymentのリビジョンは、Deploymentのロールアウトがトリガー Labels: app=nginx Containers: nginx: - Image: nginx:1.91 + Image: nginx:1.161 Port: 80/TCP Host Port: 0/TCP Environment: @@ -408,13 +408,13 @@ Deploymentのリビジョンは、Deploymentのロールアウトがトリガー deployments "nginx-deployment" REVISION CHANGE-CAUSE 1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml --record=true - 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true - 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true + 2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true + 3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.161 --record=true ``` `CHANGE-CAUSE`はリビジョンの作成時にDeploymentの`kubernetes.io/change-cause`アノテーションからリビジョンにコピーされます。下記の手段により`CHANGE-CAUSE`メッセージを指定できます。 - * `kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.9.1"`の実行によりアノテーションを追加する。 + * `kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"`の実行によりアノテーションを追加する。 * リソースの変更時に`kubectl`コマンドの内容を記録するために`--record`フラグを追加する。 * リソースのマニフェストを手動で編集する。 @@ -428,10 +428,10 @@ 
Deploymentのリビジョンは、Deploymentのロールアウトがトリガー
deployments "nginx-deployment" revision 2
  Labels:       app=nginx
          pod-template-hash=1159050644
-  Annotations:  kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
+  Annotations:  kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true
  Containers:
   nginx:
-    Image:      nginx:1.9.1
+    Image:      nginx:1.16.1
    Port:       80/TCP
  QoS Tier:
    cpu:      BestEffort
@@ -488,7 +488,7 @@ Deploymentのリビジョンは、Deploymentのロールアウトがトリガー
CreationTimestamp:      Sun, 02 Sep 2018 18:17:55 -0500
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision=4
-                        kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
+                        kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 --record=true
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
@@ -498,7 +498,7 @@ Deploymentのリビジョンは、Deploymentのロールアウトがトリガー
  Labels:  app=nginx
  Containers:
   nginx:
-    Image:      nginx:1.9.1
+    Image:      nginx:1.16.1
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:
@@ -647,7 +647,7 @@ Deploymentのローリングアップデートは、同時に複数のバージ

* 次にDeploymentのイメージを更新します。

  ```shell
-  kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1
+  kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
  ```

  実行結果は下記のとおりです。

@@ -915,7 +915,7 @@ Deploymentは[`.spec`セクション](https://git.k8s.io/community/contributors/

`.spec.template`と`.spec.selector`は`.spec`における必須のフィールドです。

-`.spec.template`は[Podテンプレート](/docs/concepts/workloads/pods/pod-overview/#pod-templates)です。これは.spec内でネストされていないことと、`apiVersion`や`kind`を持たないことを除いては[Pod](/docs/concepts/workloads/pods/pod/)と同じスキーマとなります。
+`.spec.template`は[Podテンプレート](/ja/docs/concepts/workloads/pods/pod-overview/#podテンプレート)です。これは`.spec`内でネストされていることと、`apiVersion`や`kind`を持たないことを除いては[Pod](/ja/docs/concepts/workloads/pods/pod/)と同じスキーマとなります。

Podの必須フィールドに加えて、Deployment内のPodテンプレートでは適切なラベルと再起動ポリシーを設定しなくてはなりません。ラベルは他のコントローラーと重複しないようにしてください。ラベルについては、[セレクター](#selector)を参照してください。
diff --git a/content/ja/docs/concepts/workloads/controllers/replicaset.md b/content/ja/docs/concepts/workloads/controllers/replicaset.md
index 0c4a47f69463f..3c20e295e651c 100644
--- a/content/ja/docs/concepts/workloads/controllers/replicaset.md
+++ b/content/ja/docs/concepts/workloads/controllers/replicaset.md
@@ -192,11 +192,11 @@ pod2      1/1       Running   0          13s

ReplicaSetでは、`kind`フィールドの値は`ReplicaSet`です。
Kubernetes1.9において、ReplicaSetは`apps/v1`というAPIバージョンが現在のバージョンで、デフォルトで有効です。`apps/v1beta2`というAPIバージョンは廃止されています。先ほど作成した`frontend.yaml`ファイルの最初の行を参考にしてください。

-また、ReplicaSetは[`.spec` セクション](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status)も必須です。
+また、ReplicaSetは[`.spec` セクション](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)も必須です。

### Pod テンプレート

-`.spec.template`はラベルを持つことが必要な[Pod テンプレート](/docs/concepts/workloads/Pods/pod-overview/#pod-templates) です。先ほど作成した`frontend.yaml`の例では、`tier: frontend`というラベルを1つ持っています。
+`.spec.template`はラベルを持つことが必要な[Pod テンプレート](/ja/docs/concepts/workloads/pods/pod-overview/#podテンプレート) です。先ほど作成した`frontend.yaml`の例では、`tier: frontend`というラベルを1つ持っています。
他のコントローラーがこのPodを所有しようとしないためにも、他のコントローラーのセレクターでラベルを上書きしないように注意してください。

テンプレートの[再起動ポリシー](/docs/concepts/workloads/Pods/pod-lifecycle/#restart-policy)のためのフィールドである`.spec.template.spec.restartPolicy`は`Always`のみ許可されていて、そしてそれがデフォルト値です。
@@ -287,7 +287,7 @@ kubectl
autoscale rs frontend --max=10 ### Deployment (推奨) -[`Deployment`](/docs/concepts/workloads/controllers/deployment/)はReplicaSetを所有することのできるオブジェクトで、宣言的なサーバサイドのローリングアップデートを介してReplicaSetとPodをアップデートできます。 +[`Deployment`](/ja/docs/concepts/workloads/controllers/deployment/)はReplicaSetを所有することのできるオブジェクトで、宣言的なサーバサイドのローリングアップデートを介してReplicaSetとPodをアップデートできます。 ReplicaSetは単独で使用可能ですが、現在では、ReplicaSetは主にPodの作成、削除とアップデートを司るためのメカニズムとしてDeploymentによって使用されています。ユーザーがDeploymentを使用するとき、Deploymentによって作成されるReplicaSetの管理について心配する必要はありません。DeploymentはReplicaSetを所有し、管理します。 このため、もしユーザーがReplicaSetを必要とするとき、Deploymentの使用を推奨します。 @@ -303,7 +303,7 @@ PodをPodそれ自身で停止させたいような場合(例えば、バッチ ### DaemonSet -マシンの監視やロギングなど、マシンレベルの機能を提供したい場合は、ReplicaSetの代わりに[`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/)を使用してください。 +マシンの監視やロギングなど、マシンレベルの機能を提供したい場合は、ReplicaSetの代わりに[`DaemonSet`](/ja/docs/concepts/workloads/controllers/daemonset/)を使用してください。 これらのPodはマシン自体のライフタイムに紐づいています: そのPodは他のPodが起動する前に、そのマシン上で稼働される必要があり、マシンが再起動またはシャットダウンされるときには、安全に停止されます。 ### ReplicationController diff --git a/content/ja/docs/concepts/workloads/controllers/statefulset.md b/content/ja/docs/concepts/workloads/controllers/statefulset.md index 06fb5d03ecd17..e16488b5b882c 100644 --- a/content/ja/docs/concepts/workloads/controllers/statefulset.md +++ b/content/ja/docs/concepts/workloads/controllers/statefulset.md @@ -29,14 +29,14 @@ StatefulSetは下記の1つ以上の項目を要求するアプリケーショ 上記において安定とは、Podのスケジュール(または再スケジュール)をまたいでも永続的であることと同義です。 もしアプリケーションが安定したネットワーク識別子と規則的なデプロイや削除、スケーリングを全く要求しない場合、ユーザーはステートレスなレプリカのセットを提供するコントローラーを使ってアプリケーションをデプロイするべきです。 -[Deployment](/docs/concepts/workloads/controllers/deployment/)や[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)のようなコントローラーはこのようなステートレスな要求に対して最適です。 +[Deployment](/ja/docs/concepts/workloads/controllers/deployment/)や[ReplicaSet](/ja/docs/concepts/workloads/controllers/replicaset/)のようなコントローラーはこのようなステートレスな要求に対して最適です。 ## 制限事項 * StatefuleSetはKubernetes1.9より以前のバージョンではβ版のリソースであり、1.5より前のバージョンでは利用できません。 * 提供されたPodのストレージは、要求された`storage class`にもとづいて[PersistentVolume Provisioner](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/README.md)によってプロビジョンされるか、管理者によって事前にプロビジョンされなくてはなりません。 * StatefulSetの削除もしくはスケールダウンをすることにより、StatefulSetに関連したボリュームは削除*されません* 。 これはデータ安全性のためで、関連するStatefulSetのリソース全てを自動的に削除するよりもたいてい有効です。 -* StatefulSetは現在、Podのネットワークアイデンティティーに責務をもつために[Headless Service](/docs/concepts/services-networking/service/#headless-services)を要求します。ユーザーはこのServiceを作成する責任があります。 +* StatefulSetは現在、Podのネットワークアイデンティティーに責務をもつために[Headless Service](/ja/docs/concepts/services-networking/service/#headless-service)を要求します。ユーザーはこのServiceを作成する責任があります。 * StatefulSetは、StatefulSetが削除されたときにPodの停止を行うことを保証していません。StatefulSetにおいて、規則的で安全なPodの停止を行う場合、削除のために事前にそのStatefulSetの数を0にスケールダウンさせることが可能です。 * デフォルト設定の[Pod管理ポリシー](#pod-management-policies) (`OrderedReady`)によって[ローリングアップデート](#rolling-updates)を行う場合、[修復のための手動介入](#forced-rollback)を要求するようなブロークンな状態に遷移させることが可能です。 @@ -116,11 +116,11 @@ N個のレプリカをもったStatefulSetにおいて、StatefulSet内の各Pod StatefulSet内の各Podは、そのStatefulSet名とPodの順序番号から派生してホストネームが割り当てられます。 作成されたホストネームの形式は`$(StatefulSet名)-$(順序番号)`となります。先ほどの上記の例では、`web-0,web-1,web-2`という3つのPodが作成されます。 -StatefulSetは、Podのドメインをコントロールするために[Headless Service](/docs/concepts/services-networking/service/#headless-services)を使うことができます。 +StatefulSetは、Podのドメインをコントロールするために[Headless Service](/ja/docs/concepts/services-networking/service/#headless-service)を使うことができます。 このHeadless 
Serviceによって管理されたドメインは`$(Service名).$(ネームスペース).svc.cluster.local`形式となり、"cluster.local"というのはそのクラスターのドメインとなります。 各Podが作成されると、Podは`$(Pod名).$(管理するServiceドメイン名)`に一致するDNSサブドメインを取得し、管理するServiceはStatefulSetの`serviceName`で定義されます。 -[制限事項](#制限事項)セクションで言及したように、ユーザーはPodのネットワークアイデンティティーのために[Headless Service](/docs/concepts/services-networking/service/#headless-services)を作成する責任があります。 +[制限事項](#制限事項)セクションで言及したように、ユーザーはPodのネットワークアイデンティティーのために[Headless Service](/ja/docs/concepts/services-networking/service/#headless-service)を作成する責任があります。 ここで、クラスタードメイン、Service名、StatefulSet名の選択と、それらがStatefulSetのPodのDNS名にどう影響するかの例をあげます。 diff --git a/content/ja/docs/concepts/workloads/controllers/ttlafterfinished.md b/content/ja/docs/concepts/workloads/controllers/ttlafterfinished.md index ee1dea77ebef7..3c28fe25ea176 100644 --- a/content/ja/docs/concepts/workloads/controllers/ttlafterfinished.md +++ b/content/ja/docs/concepts/workloads/controllers/ttlafterfinished.md @@ -26,7 +26,7 @@ TTLコントローラーは現在[Job](/docs/concepts/workloads/controllers/jobs TTLコントローラーは現在Jobに対してのみサポートされています。クラスターオペレーターはこの[例](/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically)のように、Jobの`.spec.ttlSecondsAfterFinished`フィールドを指定することにより、終了したJob(`完了した`もしくは`失敗した`)を自動的に削除するためにこの機能を使うことができます。 TTLコントローラーは、そのリソースが終了したあと指定したTTLの秒数後に削除できるか推定します。言い換えると、そのTTLが期限切れになると、TTLコントローラーがリソースをクリーンアップするときに、そのリソースに紐づく従属オブジェクトも一緒に連続で削除します。注意点として、リソースが削除されるとき、ファイナライザーのようなライフサイクルに関する保証は尊重されます。 -TTL秒はいつでもセット可能です。下記はJobの`.spec.ttlSecondsAfterFinished`フィールドのセットに関するいくつかの例です。 +TTL秒はいつでもセット可能です。下記はJobの`.spec.ttlSecondsAfterFinished`フィールドのセットに関するいくつかの例です。 * Jobがその終了後にいくつか時間がたった後に自動的にクリーンアップできるように、そのリソースマニフェストにこの値を指定します。 * この新しい機能を適用させるために、存在していて既に終了したリソースに対してこのフィールドをセットします。 diff --git a/content/ja/docs/concepts/workloads/pods/init-containers.md b/content/ja/docs/concepts/workloads/pods/init-containers.md index 8ba075b32cf61..0f25a656b2c3d 100644 --- a/content/ja/docs/concepts/workloads/pods/init-containers.md +++ b/content/ja/docs/concepts/workloads/pods/init-containers.md @@ -13,7 +13,7 @@ weight: 40 {{% capture body %}} ## Initコンテナを理解する -単一の[Pod](/docs/concepts/workloads/pods/pod-overview/)は、Pod内に複数のコンテナを稼働させることができますが、Initコンテナもまた、アプリケーションコンテナが稼働する前に1つまたは複数稼働できます。 +単一の[Pod](/ja/docs/concepts/workloads/pods/pod-overview/)は、Pod内に複数のコンテナを稼働させることができますが、Initコンテナもまた、アプリケーションコンテナが稼働する前に1つまたは複数稼働できます。 Initコンテナは下記の項目をのぞいて、通常のコンテナと全く同じものとなります。 @@ -57,7 +57,7 @@ Initコンテナはアプリケーションコンテナのイメージとは分 * ボリュームにあるgitリポジトリをクローンします。 * メインのアプリケーションコンテナのための設定ファイルを動的に生成するために、いくつかの値を設定ファイルに移してテンプレートツールを稼働させます。例えば、設定ファイルにそのPodのPOD_IPを移して、Jinjaを使ってメインのアプリケーションコンテナの設定ファイルを生成します。 -さらに詳細な使用例は、[StatefulSetsのドキュメント](/docs/concepts/workloads/controllers/statefulset/)と[Production Pods guide](/docs/tasks/configure-pod-container/configure-pod-initialization/)にまとまっています。 +さらに詳細な使用例は、[StatefulSetsのドキュメント](/ja/docs/concepts/workloads/controllers/statefulset/)と[Production Pods guide](/docs/tasks/configure-pod-container/configure-pod-initialization/)にまとまっています。 ### Initコンテナの使用 diff --git a/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md b/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md index 42fc0a86d3fc0..07437b9847bf5 100644 --- a/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md @@ -223,9 +223,9 @@ kubeletによって再起動される終了したコンテナは、5分後にキ - バッチ計算などのように終了が予想されるPodに対しては、[Job](/docs/concepts/jobs/run-to-completion-finite-workloads/)を使用します。 Jobは`restartPolicy`がOnFailureまたはNeverになるPodに対してのみ適切です。 -- 
停止することを期待しないPod(たとえばWebサーバーなど)には、[ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/)、[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)、または[Deployment](/docs/concepts/workloads/controllers/deployment/)を使用します。ReplicationControllerは`restartPolicy`がAlwaysのPodに対してのみ適切です。
+- 停止することを期待しないPod(たとえばWebサーバーなど)には、[ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/)、[ReplicaSet](/ja/docs/concepts/workloads/controllers/replicaset/)、または[Deployment](/ja/docs/concepts/workloads/controllers/deployment/)を使用します。ReplicationControllerは`restartPolicy`がAlwaysのPodに対してのみ適切です。

-- マシン固有のシステムサービスを提供するため、マシンごとに1つずつ実行する必要があるPodには[DaemonSet](/docs/concepts/workloads/controllers/daemonset/)を使用します。
+- マシン固有のシステムサービスを提供するため、マシンごとに1つずつ実行する必要があるPodには[DaemonSet](/ja/docs/concepts/workloads/controllers/daemonset/)を使用します。

3種類のコントローラにはすべてPodTemplateが含まれます。
Podを自分で直接作成するのではなく適切なコントローラを作成してPodを作成させることをおすすめします。
diff --git a/content/ja/docs/concepts/workloads/pods/pod-overview.md b/content/ja/docs/concepts/workloads/pods/pod-overview.md
index f1f48a57b64c4..44388337c787a 100644
--- a/content/ja/docs/concepts/workloads/pods/pod-overview.md
+++ b/content/ja/docs/concepts/workloads/pods/pod-overview.md
@@ -1,6 +1,4 @@
---
-reviewers:
-- erictune
title: Podについての概観(Pod Overview)
content_template: templates/concept
weight: 10
@@ -80,16 +78,16 @@ Podは、Podそれ自体によって自己修復しません。もし、稼働

1つまたはそれ以上のPodを含むコントローラーの例は下記の通りです。

-* [Deployment](/docs/concepts/workloads/controllers/deployment/)
-* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/)
-* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)
+* [Deployment](/ja/docs/concepts/workloads/controllers/deployment/)
+* [StatefulSet](/ja/docs/concepts/workloads/controllers/statefulset/)
+* [DaemonSet](/ja/docs/concepts/workloads/controllers/daemonset/)

通常は、コントローラーはユーザーが作成したPodテンプレートを使用して、担当するPodを作成します。

## Podテンプレート

Podテンプレートは、[ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/)、 [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/)や、
-[DaemonSet](/docs/concepts/workloads/controllers/daemonset/)のような他のオブジェクト内で含まれるPodの仕様となります。
+[DaemonSet](/ja/docs/concepts/workloads/controllers/daemonset/)のような他のオブジェクト内で含まれるPodの仕様となります。
コントローラーは実際のPodを作成するためにPodテンプレートを使用します。
下記のサンプルは、メッセージを表示する単一のコンテナを含んだ、シンプルなPodのマニフェストとなります。
diff --git a/content/ja/docs/concepts/workloads/pods/pod.md b/content/ja/docs/concepts/workloads/pods/pod.md
index 3987e9f842ed0..48be657bc818e 100644
--- a/content/ja/docs/concepts/workloads/pods/pod.md
+++ b/content/ja/docs/concepts/workloads/pods/pod.md
@@ -110,9 +110,9 @@ Podは、耐久性のある存在として扱われることを意図してい

リソースの不足やNodeのメンテナンスといった場合に、追い出されて停止することもあり得ます。

一般に、ユーザーはPodを直接作成する必要はありません。
-ほとんどの場合、対象がシングルトンであったとしても、[Deployments](/docs/concepts/workloads/controllers/deployment/)などのコントローラーを使用するべきです。
+ほとんどの場合、対象がシングルトンであったとしても、[Deployments](/ja/docs/concepts/workloads/controllers/deployment/)などのコントローラーを使用するべきです。
コントローラーは、レプリケーションとロールアウト管理だけでなく、クラスターレベルの自己修復機能も提供します。
-[StatefulSet](/docs/concepts/workloads/controllers/statefulset.md)ようなコントローラーもステートフルなPodをサポートします。
+[StatefulSet](/ja/docs/concepts/workloads/controllers/statefulset.md)のようなコントローラーもステートフルなPodをサポートします。

主要なユーザー向けのプリミティブとして集合APIを使用することは、[Borg](https://research.google.com/pubs/pub43438.html)、
[Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)、[Aurora](http://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)、[Tupperware](http://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997)などのクラスタースケジューリングシステムで比較的一般的です。 @@ -151,7 +151,7 @@ PodはAPIから消え、クライアントからは見えなくなる デフォルトでは、すべての削除は30秒以内に正常に行われます。 `kubectl delete` コマンドは、ユーザーがデフォルト値を上書きして独自の値を指定できるようにする `--grace-period=` オプションをサポートします。 -値 `0` はPodを[強制的に削除](/docs/concepts/workloads/pods/pod/#force-deletion-of-pods)します。 +値 `0` はPodを[強制的に削除](/ja/docs/concepts/workloads/pods/pod/#podの強制削除)します。 kubectlのバージョン1.5以降では、強制削除を実行するために `--grace-period=0` と共に `--force` というフラグを追加で指定する必要があります。 ### Podの強制削除 diff --git a/content/ja/docs/reference/_index.md b/content/ja/docs/reference/_index.md index d5f8120dbdb48..7cbe46514ba6e 100644 --- a/content/ja/docs/reference/_index.md +++ b/content/ja/docs/reference/_index.md @@ -18,11 +18,11 @@ content_template: templates/concept * [Kubernetes API概要](/docs/reference/using-api/api-overview/) - Kubernetes APIの概要です。 * Kubernetes APIバージョン + * [1.17](/docs/reference/generated/kubernetes-api/v1.17/) * [1.16](/docs/reference/generated/kubernetes-api/v1.16/) * [1.15](/docs/reference/generated/kubernetes-api/v1.15/) * [1.14](/docs/reference/generated/kubernetes-api/v1.14/) * [1.13](/docs/reference/generated/kubernetes-api/v1.13/) - * [1.12](/docs/reference/generated/kubernetes-api/v1.12/) ## APIクライアントライブラリー diff --git a/content/ja/docs/reference/command-line-tools-reference/feature-gates.md b/content/ja/docs/reference/command-line-tools-reference/feature-gates.md index b20bbcfd23cbf..582d432e94cea 100644 --- a/content/ja/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/ja/docs/reference/command-line-tools-reference/feature-gates.md @@ -37,10 +37,9 @@ content_template: templates/concept |---------|---------|-------|-------|-------| | `APIListChunking` | `false` | Alpha | 1.8 | 1.8 | | `APIListChunking` | `true` | Beta | 1.9 | | +| `APIPriorityAndFairness` | `false` | Alpha | 1.17 | | | `APIResponseCompression` | `false` | Alpha | 1.7 | | | `AppArmor` | `true` | Beta | 1.4 | | -| `AttachVolumeLimit` | `true` | Alpha | 1.11 | 1.11 | -| `AttachVolumeLimit` | `true` | Beta | 1.12 | | | `BalanceAttachedNodeVolumes` | `false` | Alpha | 1.11 | | | `BlockVolume` | `false` | Alpha | 1.9 | 1.12 | | `BlockVolume` | `true` | Beta | 1.13 | - | @@ -55,14 +54,20 @@ content_template: templates/concept | `CSIDriverRegistry` | `true` | Beta | 1.14 | | | `CSIInlineVolume` | `false` | Alpha | 1.15 | 1.15 | | `CSIInlineVolume` | `true` | Beta | 1.16 | - | -| `CSIMigration` | `false` | Alpha | 1.14 | | +| `CSIMigration` | `false` | Alpha | 1.14 | 1.16 | +| `CSIMigration` | `true` | Beta | 1.17 | | | `CSIMigrationAWS` | `false` | Alpha | 1.14 | | +| `CSIMigrationAWS` | `false` | Beta | 1.17 | | +| `CSIMigrationAWSComplete` | `false` | Alpha | 1.17 | | | `CSIMigrationAzureDisk` | `false` | Alpha | 1.15 | | +| `CSIMigrationAzureDiskComplete` | `false` | Alpha | 1.17 | | | `CSIMigrationAzureFile` | `false` | Alpha | 1.15 | | -| `CSIMigrationGCE` | `false` | Alpha | 1.14 | | +| `CSIMigrationAzureFileComplete` | `false` | Alpha | 1.17 | | +| `CSIMigrationGCE` | `false` | Alpha | 1.14 | 1.16 | +| `CSIMigrationGCE` | `false` | Beta | 1.17 | | +| `CSIMigrationGCEComplete` | `false` | Alpha | 1.17 | | | `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | | -| `CSINodeInfo` | `false` | Alpha | 1.12 | 1.13 | -| `CSINodeInfo` | `true` | Beta | 1.14 | | +| 
`CSIMigrationOpenStackComplete` | `false` | Alpha | 1.17 | | | `CustomCPUCFSQuotaPeriod` | `false` | Alpha | 1.12 | | | `CustomResourceDefaulting` | `false` | Alpha| 1.15 | 1.15 | | `CustomResourceDefaulting` | `true` | Beta | 1.16 | | @@ -73,7 +78,8 @@ content_template: templates/concept | `DynamicAuditing` | `false` | Alpha | 1.13 | | | `DynamicKubeletConfig` | `false` | Alpha | 1.4 | 1.10 | | `DynamicKubeletConfig` | `true` | Beta | 1.11 | | -| `EndpointSlice` | `false` | Alpha | 1.16 | | +| `EndpointSlice` | `false` | Alpha | 1.16 | 1.16 | +| `EndpointSlice` | `false` | Beta | 1.17 | | | `EphemeralContainers` | `false` | Alpha | 1.16 | | | `ExpandCSIVolumes` | `false` | Alpha | 1.14 | 1.15 | | `ExpandCSIVolumes` | `true` | Beta | 1.16 | | @@ -93,33 +99,23 @@ content_template: templates/concept | `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | | | `MountContainers` | `false` | Alpha | 1.9 | | | `NodeDisruptionExclusion` | `false` | Alpha | 1.16 | | -| `NodeLease` | `false` | Alpha | 1.12 | 1.13 | -| `NodeLease` | `true` | Beta | 1.14 | | | `NonPreemptingPriority` | `false` | Alpha | 1.15 | | | `PodOverhead` | `false` | Alpha | 1.16 | - | -| `PodShareProcessNamespace` | `false` | Alpha | 1.10 | 1.11 | -| `PodShareProcessNamespace` | `true` | Beta | 1.12 | | | `ProcMountType` | `false` | Alpha | 1.12 | | | `QOSReserved` | `false` | Alpha | 1.11 | | | `RemainingItemCount` | `false` | Alpha | 1.15 | | -| `RequestManagement` | `false` | Alpha | 1.15 | | | `ResourceLimitsPriorityFunction` | `false` | Alpha | 1.9 | | -| `ResourceQuotaScopeSelectors` | `false` | Alpha | 1.11 | 1.11 | -| `ResourceQuotaScopeSelectors` | `true` | Beta | 1.12 | | | `RotateKubeletClientCertificate` | `true` | Beta | 1.8 | | | `RotateKubeletServerCertificate` | `false` | Alpha | 1.7 | 1.11 | | `RotateKubeletServerCertificate` | `true` | Beta | 1.12 | | | `RunAsGroup` | `true` | Beta | 1.14 | | | `RuntimeClass` | `false` | Alpha | 1.12 | 1.13 | | `RuntimeClass` | `true` | Beta | 1.14 | | -| `ScheduleDaemonSetPods` | `false` | Alpha | 1.11 | 1.11 | -| `ScheduleDaemonSetPods` | `true` | Beta | 1.12 | | | `SCTPSupport` | `false` | Alpha | 1.12 | | | `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 | | `ServerSideApply` | `true` | Beta | 1.16 | | -| `ServiceLoadBalancerFinalizer` | `false` | Alpha | 1.15 | | | `ServiceNodeExclusion` | `false` | Alpha | 1.8 | | -| `StartupProbe` | `false` | Alpha | 1.16 | | +| `StartupProbe` | `true` | Beta | 1.17 | | | `StorageVersionHash` | `false` | Alpha | 1.14 | 1.14 | | `StorageVersionHash` | `true` | Beta | 1.15 | | | `StreamingProxyRedirects` | `false` | Beta | 1.5 | 1.5 | @@ -131,8 +127,6 @@ content_template: templates/concept | `Sysctls` | `true` | Beta | 1.11 | | | `TaintBasedEvictions` | `false` | Alpha | 1.6 | 1.12 | | `TaintBasedEvictions` | `true` | Beta | 1.13 | | -| `TaintNodesByCondition` | `false` | Alpha | 1.8 | 1.11 | -| `TaintNodesByCondition` | `true` | Beta | 1.12 | | | `TokenRequest` | `false` | Alpha | 1.10 | 1.11 | | `TokenRequest` | `true` | Beta | 1.12 | | | `TokenRequestProjection` | `false` | Alpha | 1.11 | 1.11 | @@ -143,11 +137,8 @@ content_template: templates/concept | `ValidateProxyRedirects` | `true` | Beta | 1.14 | | | `VolumePVCDataSource` | `false` | Alpha | 1.15 | 1.15 | | `VolumePVCDataSource` | `true` | Beta | 1.16 | | -| `VolumeSubpathEnvExpansion` | `false` | Alpha | 1.14 | 1.14 | -| `VolumeSubpathEnvExpansion` | `true` | Beta | 1.15 | | -| `VolumeSnapshotDataSource` | `false` | Alpha | 1.12 | - | -| `WatchBookmark` | 
`false` | Alpha | 1.15 | 1.15 | -| `WatchBookmark` | `true` | Beta | 1.16 | | +| `VolumeSnapshotDataSource` | `false` | Alpha | 1.12 | 1.16 | +| `VolumeSnapshotDataSource` | `true` | Beta | 1.17 | - | | `WindowsGMSA` | `false` | Alpha | 1.14 | | | `WindowsGMSA` | `true` | Beta | 1.16 | | | `WinDSR` | `false` | Alpha | 1.14 | | @@ -169,6 +160,12 @@ content_template: templates/concept | `AffinityInAnnotations` | - | Deprecated | 1.8 | - | | `AllowExtTrafficLocalEndpoints` | `false` | Beta | 1.4 | 1.6 | | `AllowExtTrafficLocalEndpoints` | `true` | GA | 1.7 | - | +| `CSINodeInfo` | `false` | Alpha | 1.12 | 1.13 | +| `CSINodeInfo` | `true` | Beta | 1.14 | 1.16 | +| `CSINodeInfo` | `true` | GA | 1.17 | | +| `AttachVolumeLimit` | `false` | Alpha | 1.11 | 1.11 | +| `AttachVolumeLimit` | `true` | Beta | 1.12 | 1.16 | +| `AttachVolumeLimit` | `true` | GA | 1.17 | - | | `CSIPersistentVolume` | `false` | Alpha | 1.9 | 1.9 | | `CSIPersistentVolume` | `true` | Beta | 1.10 | 1.12 | | `CSIPersistentVolume` | `true` | GA | 1.13 | - | @@ -210,6 +207,9 @@ content_template: templates/concept | `MountPropagation` | `false` | Alpha | 1.8 | 1.9 | | `MountPropagation` | `true` | Beta | 1.10 | 1.11 | | `MountPropagation` | `true` | GA | 1.12 | - | +| `NodeLease` | `false` | Alpha | 1.12 | 1.13 | +| `NodeLease` | `true` | Beta | 1.14 | 1.16 | +| `NodeLease` | `true` | GA | 1.17 | - | | `PersistentLocalVolumes` | `false` | Alpha | 1.7 | 1.9 | | `PersistentLocalVolumes` | `true` | Beta | 1.10 | 1.13 | | `PersistentLocalVolumes` | `true` | GA | 1.14 | - | @@ -219,18 +219,40 @@ content_template: templates/concept | `PodReadinessGates` | `false` | Alpha | 1.11 | 1.11 | | `PodReadinessGates` | `true` | Beta | 1.12 | 1.13 | | `PodReadinessGates` | `true` | GA | 1.14 | - | +| `PodShareProcessNamespace` | `false` | Alpha | 1.10 | 1.11 | +| `PodShareProcessNamespace` | `true` | Beta | 1.12 | 1.16 | +| `PodShareProcessNamespace` | `true` | GA | 1.17 | - | | `PVCProtection` | `false` | Alpha | 1.9 | 1.9 | | `PVCProtection` | - | Deprecated | 1.10 | - | +| `RequestManagement` | `false` | Alpha | 1.15 | 1.16 | +| `ResourceQuotaScopeSelectors` | `false` | Alpha | 1.11 | 1.11 | +| `ResourceQuotaScopeSelectors` | `true` | Beta | 1.12 | 1.16 | +| `ResourceQuotaScopeSelectors` | `true` | GA | 1.17 | - | +| `ScheduleDaemonSetPods` | `false` | Alpha | 1.11 | 1.11 | +| `ScheduleDaemonSetPods` | `true` | Beta | 1.12 | 1.16 | +| `ScheduleDaemonSetPods` | `true` | GA | 1.17 | - | +| `ServiceLoadBalancerFinalizer` | `false` | Alpha | 1.15 | 1.15 | +| `ServiceLoadBalancerFinalizer` | `true` | Beta | 1.16 | 1.16 | +| `ServiceLoadBalancerFinalizer` | `true` | GA | 1.17 | - | | `StorageObjectInUseProtection` | `true` | Beta | 1.10 | 1.10 | | `StorageObjectInUseProtection` | `true` | GA | 1.11 | - | | `SupportIPVSProxyMode` | `false` | Alpha | 1.8 | 1.8 | | `SupportIPVSProxyMode` | `false` | Beta | 1.9 | 1.9 | | `SupportIPVSProxyMode` | `true` | Beta | 1.10 | 1.10 | | `SupportIPVSProxyMode` | `true` | GA | 1.11 | - | +| `TaintNodesByCondition` | `false` | Alpha | 1.8 | 1.11 | +| `TaintNodesByCondition` | `true` | Beta | 1.12 | 1.16 | +| `TaintNodesByCondition` | `true` | GA | 1.17 | - | | `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 | | `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 | | `VolumeScheduling` | `true` | GA | 1.13 | - | | `VolumeSubpath` | `true` | GA | 1.13 | - | +| `VolumeSubpathEnvExpansion` | `false` | Alpha | 1.14 | 1.14 | +| `VolumeSubpathEnvExpansion` | `true` | Beta | 1.15 | 1.16 | +| `VolumeSubpathEnvExpansion` | 
`true` | GA | 1.17 | - |
+| `WatchBookmark` | `false` | Alpha | 1.15 | 1.15 |
+| `WatchBookmark` | `true` | Beta | 1.16 | 1.16 |
+| `WatchBookmark` | `true` | GA | 1.17 | - |
{{< /table >}}

## 機能を使用する

@@ -270,9 +292,10 @@ GAになってからさらなる変更を加えることは現実的ではない

- `Accelerators`: DockerでのNvidia GPUのサポートを有効にします。
- `AdvancedAuditing`: [高度な監査機能](/docs/tasks/debug-application-cluster/audit/#advanced-audit)を有効にします。
-- `AffinityInAnnotations`(*非推奨*): [Podのアフィニティまたはアンチアフィニティ](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)を有効にします。
+- `AffinityInAnnotations`(*非推奨*): [Podのアフィニティまたはアンチアフィニティ](/ja/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)を有効にします。
- `AllowExtTrafficLocalEndpoints`: サービスが外部へのリクエストをノードのローカルエンドポイントにルーティングできるようにします。
- `APIListChunking`: APIクライアントがAPIサーバーからチャンク単位で(`LIST`や`GET`の)リソースを取得できるようにします。
+- `APIPriorityAndFairness`: 各サーバーで優先順位付けと公平性を備えた要求の並行性を管理できるようにします(`RequestManagement`から名前が変更されました)。
- `APIResponseCompression`:`LIST`や`GET`リクエストのAPIレスポンスを圧縮します。
- `AppArmor`: Dockerを使用する場合にLinuxノードでAppArmorによる強制アクセスコントロールを有効にします。詳細は[AppArmorチュートリアル](/docs/tutorials/clusters/apparmor/)で確認できます。
- `AttachVolumeLimit`: ボリュームプラグインを有効にすることでノードにアタッチできるボリューム数の制限を設定できます。
@@ -285,11 +308,16 @@ GAになってからさらなる変更を加えることは現実的ではない
- `CSIDriverRegistry`: csi.storage.k8s.ioのCSIDriver APIオブジェクトに関連するすべてのロジックを有効にします。
- `CSIInlineVolume`: PodのCSIインラインボリュームサポートを有効にします。
- `CSIMigration`: シムと変換ロジックを有効にしてボリューム操作をKubernetesリポジトリー内のプラグインから対応した事前インストール済みのCSIプラグインにルーティングします。
-- `CSIMigrationAWS`: シムと変換ロジックを有効にしてボリューム操作をKubernetesリポジトリー内のAWS-EBSプラグインからEBS CSIプラグインにルーティングします。
-- `CSIMigrationAzureDisk`: シムと変換ロジックを有効にしてボリューム操作をKubernetesリポジトリー内のAzure-DiskプラグインからAzure Disk CSIプラグインにルーティングします。
-- `CSIMigrationAzureFile`: シムと変換ロジックを有効にしてボリューム操作をKubernetesリポジトリー内のAzure-FileプラグインからAzure File CSIプラグインにルーティングします。
-- `CSIMigrationGCE`: シムと変換ロジックを有効にしてボリューム操作をKubernetesリポジトリー内のGCE-PDプラグインからPD CSIプラグインにルーティングします。
-- `CSIMigrationOpenStack`: シムと変換ロジックを有効にしてボリューム操作をKubernetesリポジトリー内のCinderプラグインからCinder CSIプラグインにルーティングします。
+- `CSIMigrationAWS`: シムと変換ロジックを有効にしてボリューム操作をKubernetesリポジトリー内のAWS-EBSプラグインからEBS CSIプラグインにルーティングします。ノードにEBS CSIプラグインがインストールおよび設定されていない場合、ツリー内のEBSプラグインへのフォールバックをサポートします。CSIMigration機能フラグを有効にする必要があります。
+- `CSIMigrationAWSComplete`: EBSツリー内プラグインのkubeletおよびボリュームコントローラーへの登録を停止し、シムと変換ロジックを有効にして、AWS-EBSツリー内プラグインからEBS CSIプラグインにボリューム操作をルーティングします。CSIMigrationおよびCSIMigrationAWS機能フラグを有効にし、クラスター内のすべてのノードにEBS CSIプラグインをインストールおよび設定する必要があります。
+- `CSIMigrationAzureDisk`: シムと変換ロジックを有効にしてボリューム操作をKubernetesリポジトリー内のAzure-DiskプラグインからAzure Disk CSIプラグインにルーティングします。ノードにAzureDisk CSIプラグインがインストールおよび設定されていない場合、ツリー内のAzureDiskプラグインへのフォールバックをサポートします。CSIMigration機能フラグを有効にする必要があります。
+- `CSIMigrationAzureDiskComplete`: Azure-Diskツリー内プラグインのkubeletおよびボリュームコントローラーへの登録を停止し、シムと変換ロジックを有効にして、Azure-Diskツリー内プラグインからAzureDisk CSIプラグインにボリューム操作をルーティングします。CSIMigrationおよびCSIMigrationAzureDisk機能フラグを有効にし、クラスター内のすべてのノードにAzureDisk CSIプラグインをインストールおよび設定する必要があります。
+- `CSIMigrationAzureFile`: シムと変換ロジックを有効にしてボリューム操作をKubernetesリポジトリー内のAzure-FileプラグインからAzure File CSIプラグインにルーティングします。ノードにAzureFile CSIプラグインがインストールおよび設定されていない場合、ツリー内のAzureFileプラグインへのフォールバックをサポートします。CSIMigration機能フラグを有効にする必要があります。
+- `CSIMigrationAzureFileComplete`: Azure-Fileツリー内プラグインのkubeletおよびボリュームコントローラーへの登録を停止し、シムと変換ロジックを有効にして、Azure-Fileツリー内プラグインからAzureFile CSIプラグインにボリューム操作をルーティングします。CSIMigrationおよびCSIMigrationAzureFile機能フラグを有効にし、クラスター内のすべてのノードにAzureFile CSIプラグインをインストールおよび設定する必要があります。
+- `CSIMigrationGCE`: シムと変換ロジックを有効にしてボリューム操作をKubernetesリポジトリー内のGCE-PDプラグインからPD
CSIプラグインにルーティングします。ノードにPD CSIプラグインがインストールおよび設定されていない場合、ツリー内のGCEプラグインへのフォールバックをサポートします。CSIMigration機能フラグを有効にする必要があります。 +- `CSIMigrationGCEComplete`: GCE-PDのツリー内プラグインのkubeletおよびボリュームコントローラーへの登録を停止し、シムと変換ロジックがGCE-PDのツリー内プラグインからPD CSIプラグインにボリューム操作をルーティングできるようにします。CSIMigrationおよびCSIMigrationGCE機能フラグを有効にし、クラスター内のすべてのノードにPD CSIプラグインをインストールおよび設定する必要があります。 +- `CSIMigrationOpenStack`: シムと変換ロジックを有効にしてボリューム操作をKubernetesリポジトリー内のCinderプラグインからCinder CSIプラグインにルーティングします。ノードにCinder CSIプラグインがインストールおよび設定されていない場合、ツリー内のCinderプラグインへのフォールバックをサポートします。CSIMigration機能フラグを有効にする必要があります。 +- `CSIMigrationOpenStackComplete`: Cinderのツリー内プラグインのkubeletおよびボリュームコントローラーへの登録を停止し、シムと変換ロジックがCinderのツリー内プラグインからCinder CSIプラグインにボリューム操作をルーティングできるようにします。CSIMigrationおよびCSIMigrationOpenStack機能フラグを有効にし、クラスター内のすべてのノードにCinder CSIプラグインをインストールおよび設定する必要があります。 - `CSINodeInfo`: csi.storage.k8s.ioのCSINodeInfo APIオブジェクトに関連するすべてのロジックを有効にします。 - `CSIPersistentVolume`: [CSI(Container Storage Interface)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md)互換のボリュームプラグインを通してプロビジョニングされたボリュームの検出とマウントを有効にします。 詳細については[`csi`ボリュームタイプ](/docs/concepts/storage/volumes/#csi)ドキュメントを確認してください。 @@ -314,7 +342,7 @@ GAになってからさらなる変更を加えることは現実的ではない - `ExpandPersistentVolumes`: 永続ボリュームの拡張を有効にします。[永続ボリューム要求の拡張](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)を参照してください。 - `ExperimentalCriticalPodAnnotation`: [スケジューリングが保証されるよう](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/)に特定のpodへの *クリティカル* の注釈を加える設定を有効にします。 - `ExperimentalHostUserNamespaceDefaultingGate`: ホストするデフォルトのユーザー名前空間を有効にします。これは他のホストの名前空間やホストのマウントを使用しているコンテナ、特権を持つコンテナ、または名前空間のない特定の機能(たとえば`MKNODE`、`SYS_MODULE`など)を使用しているコンテナ用です。これはDockerデーモンでユーザー名前空間の再マッピングが有効になっている場合にのみ有効にすべきです。 -- `EndpointSlice`: よりスケーラブルで拡張可能なネットワークエンドポイントのエンドポイントスライスを有効にします。対応するAPIとコントローラーを有効にする必要があります。[Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpoint-slices/)をご覧ください。 +- `EndpointSlice`: よりスケーラブルで拡張可能なネットワークエンドポイントのエンドポイントスライスを有効にします。対応するAPIとコントローラーを有効にする必要があります。[Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/)をご覧ください。 - `GCERegionalPersistentDisk`: GCEでリージョナルPD機能を有効にします。 - `HugePages`: 事前に割り当てられた[huge pages](/docs/tasks/manage-hugepages/scheduling-hugepages/)の割り当てと消費を有効にします。 - `HyperVContainer`: Windowsコンテナの[Hyper-Vによる分離](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container)を有効にします。 @@ -339,28 +367,27 @@ GAになってからさらなる変更を加えることは現実的ではない - `PVCProtection`: 永続ボリューム要求(PVC)がPodでまだ使用されているときに削除されないようにします。詳細は[ここ](/docs/tasks/administer-cluster/storage-object-in-use-protection/)で確認できます。 - `QOSReserved`: QoSレベルでのリソース予約を許可して、低いQoSレベルのポッドが高いQoSレベルで要求されたリソースにバーストするのを防ぎます(現時点ではメモリのみ)。 - `ResourceLimitsPriorityFunction`: 入力したPodのCPU制限とメモリ制限の少なくとも1つを満たすノードに対して最低スコアを1に割り当てるスケジューラー優先機能を有効にします。その目的は同じスコアを持つノード間の関係を断つことです。 -- `RequestManagement`: 各サーバーで優先順位付けと公平性を備えたリクエストの並行性の管理機能を有効にしました。 - `ResourceQuotaScopeSelectors`: リソース割当のスコープセレクターを有効にします。 - `RotateKubeletClientCertificate`: kubeletでクライアントTLS証明書のローテーションを有効にします。詳細は[kubeletの設定](/docs/tasks/administer-cluster/storage-object-in-use-protection/)で確認できます。 - `RotateKubeletServerCertificate`: kubeletでサーバーTLS証明書のローテーションを有効にします。詳細は[kubeletの設定](/docs/tasks/administer-cluster/storage-object-in-use-protection/)で確認できます。 - `RunAsGroup`: コンテナの初期化プロセスで設定されたプライマリグループIDの制御を有効にします。 - `RuntimeClass`: コンテナのランタイム構成を選択するには[RuntimeClass](/docs/concepts/containers/runtime-class/)機能を有効にします。 - 
`ScheduleDaemonSetPods`: DaemonSetのPodをDaemonSetコントローラーではなく、デフォルトのスケジューラーによってスケジュールされるようにします。 -- `SCTPSupport`: `Service`、`Endpoint`、`NetworkPolicy`、`Pod`の定義で`protocol`の値としてSCTPを使用できるようにします +- `SCTPSupport`: `Service`、`Endpoints`、`NetworkPolicy`、`Pod`の定義で`protocol`の値としてSCTPを使用できるようにします - `ServerSideApply`: APIサーバーで[サーバーサイドApply(SSA)](/docs/reference/using-api/api-concepts/#server-side-apply)のパスを有効にします。 - `ServiceLoadBalancerFinalizer`: サービスロードバランサーのファイナライザー保護を有効にします。 -- `ServiceNodeExclusion`: クラウドプロバイダーによって作成されたロードバランサーからのノードの除外を有効にします。"`alpha.service-controller.kubernetes.io/exclude-balancer`"キーでラベル付けされている場合ノードは除外の対象となります。 +- `ServiceNodeExclusion`: クラウドプロバイダーによって作成されたロードバランサーからのノードの除外を有効にします。"`alpha.service-controller.kubernetes.io/exclude-balancer`"キーまたは`node.kubernetes.io/exclude-from-external-load-balancers`でラベル付けされている場合ノードは除外の対象となります。 - `StartupProbe`: kubeletで[startup](/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-startup-probe)プローブを有効にします。 - `StorageObjectInUseProtection`: PersistentVolumeまたはPersistentVolumeClaimオブジェクトがまだ使用されている場合、それらの削除を延期します。 - `StorageVersionHash`: apiserversがディスカバリーでストレージのバージョンハッシュを公開できるようにします。 - `StreamingProxyRedirects`: ストリーミングリクエストのバックエンド(kubelet)からのリダイレクトをインターセプト(およびフォロー)するようAPIサーバーに指示します。ストリーミングリクエストの例には`exec`、`attach`、`port-forward`リクエストが含まれます。 -- `SupportIPVSProxyMode`: IPVSを使用したクラスター内サービスの負荷分散の提供を有効にします。詳細は[サービスプロキシ](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)で確認できます。 +- `SupportIPVSProxyMode`: IPVSを使用したクラスター内サービスの負荷分散の提供を有効にします。詳細は[サービスプロキシー](/ja/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)で確認できます。 - `SupportPodPidsLimit`: PodのPID制限のサポートを有効にします。 - `Sysctls`: 各podに設定できる名前空間付きのカーネルパラメーター(sysctl)のサポートを有効にします。詳細は[sysctls](/docs/tasks/administer-cluster/sysctl-cluster/)で確認できます。 - `TaintBasedEvictions`: ノードの汚染とpodの許容に基づいてノードからpodを排除できるようにします。。詳細は[汚染と許容](/docs/concepts/configuration/taint-and-toleration/)で確認できます。 -- `TaintNodesByCondition`: [ノードの条件](/docs/concepts/architecture/nodes/#condition)に基づいてノードの自動汚染を有効にします。 +- `TaintNodesByCondition`: [ノードの条件](/ja/docs/concepts/architecture/nodes/#condition)に基づいてノードの自動汚染を有効にします。 - `TokenRequest`: サービスアカウントリソースで`TokenRequest`エンドポイントを有効にします。 -- `TokenRequestProjection`: [投影ボリューム](/docs/concepts/storage/volumes/#projected)を使用したpodへのサービスアカウントのトークンの注入を有効にします。 +- `TokenRequestProjection`: [Projectedボリューム](/docs/concepts/storage/volumes/#projected)を使用したpodへのサービスアカウントのトークンの注入を有効にします。 - `TTLAfterFinished`: [TTLコントローラー](/docs/concepts/workloads/controllers/ttlafterfinished/)が実行終了後にリソースをクリーンアップできるようにします。 - `VolumePVCDataSource`: 既存のPVCをデータソースとして指定するサポートを有効にします。 - `VolumeScheduling`: ボリュームトポロジー対応のスケジューリングを有効にし、PersistentVolumeClaim(PVC)バインディングにスケジューリングの決定を認識させます。また`PersistentLocalVolumes`フィーチャーゲートと一緒に使用すると[`local`](/docs/concepts/storage/volumes/#local)ボリュームタイプの使用が可能になります。 diff --git a/content/ja/docs/reference/glossary/deployment.md b/content/ja/docs/reference/glossary/deployment.md index 6d2bd3544bde9..d3483b58fbf44 100755 --- a/content/ja/docs/reference/glossary/deployment.md +++ b/content/ja/docs/reference/glossary/deployment.md @@ -2,7 +2,7 @@ title: Deployment id: deployment date: 2018-04-12 -full_link: /docs/concepts/workloads/controllers/deployment/ +full_link: /ja/docs/concepts/workloads/controllers/deployment/ short_description: > 複製されたアプリケーションを管理するAPIオブジェクト。 diff --git a/content/ja/docs/reference/glossary/node.md b/content/ja/docs/reference/glossary/node.md index 5cc4b3481d26c..0eb8976c1b1c0 100755 --- 
a/content/ja/docs/reference/glossary/node.md +++ b/content/ja/docs/reference/glossary/node.md @@ -2,7 +2,7 @@ title: ノード id: node date: 2018-04-12 -full_link: /docs/concepts/architecture/nodes/ +full_link: /ja/docs/concepts/architecture/nodes/ short_description: > ノードはKubernetesのワーカーマシンです。 diff --git a/content/ja/docs/reference/glossary/pod.md b/content/ja/docs/reference/glossary/pod.md index 5a12c7d11c106..432733e2b8f57 100755 --- a/content/ja/docs/reference/glossary/pod.md +++ b/content/ja/docs/reference/glossary/pod.md @@ -2,7 +2,7 @@ title: Pod id: pod date: 2018-04-12 -full_link: /docs/concepts/workloads/pods/pod-overview/ +full_link: /ja/docs/concepts/workloads/pods/pod-overview/ short_description: > 一番小さく一番シンプルな Kubernetes のオブジェクト。Pod とはクラスターで動作しているいくつかのコンテナのまとまりです。 diff --git a/content/ja/docs/reference/glossary/service-catalog.md b/content/ja/docs/reference/glossary/service-catalog.md new file mode 100755 index 0000000000000..328bcf7280404 --- /dev/null +++ b/content/ja/docs/reference/glossary/service-catalog.md @@ -0,0 +1,16 @@ +--- +title: サービスカタログ +id: service-catalog +date: 2018-04-12 +full_link: +short_description: > + Kubernetesクラスターで稼働するアプリケーションが、クラウドプロバイダーによって提供されるデータストアサービスのように、外部のマネージドソフトウェアを容易に使えるようにするための拡張APIです。 + +aka: +tags: +- extension +--- + Kubernetesクラスターで稼働するアプリケーションが、クラウドプロバイダーによって提供されるデータストアサービスのように、外部のマネージドソフトウェアを容易に使えるようにするための拡張APIです。 + + +サービスカタログを使用することで{{< glossary_tooltip text="サービスブローカー" term_id="service-broker" >}}が提供する{{< glossary_tooltip text="マネージドサービス" term_id="managed-service" >}}を、それらのサービスがどのように作成されるか、また管理されるかについての知識を無しに、一覧表示したり、プロビジョニングや使用をすることができます。 diff --git a/content/ja/docs/reference/glossary/service.md b/content/ja/docs/reference/glossary/service.md index 9b43dec6f8372..212c3acce1059 100755 --- a/content/ja/docs/reference/glossary/service.md +++ b/content/ja/docs/reference/glossary/service.md @@ -2,7 +2,7 @@ title: Service id: service date: 2018-04-12 -full_link: /docs/concepts/services-networking/service/ +full_link: /ja/docs/concepts/services-networking/service/ short_description: > Podの集合で実行されているアプリケーションをネットワークサービスとして公開する方法。 diff --git a/content/ja/docs/reference/glossary/statefulset.md b/content/ja/docs/reference/glossary/statefulset.md index 3b23ea48cebc3..bcb947367d751 100755 --- a/content/ja/docs/reference/glossary/statefulset.md +++ b/content/ja/docs/reference/glossary/statefulset.md @@ -2,7 +2,7 @@ title: StatefulSet id: statefulset date: 2018-04-12 -full_link: /docs/concepts/workloads/controllers/statefulset/ +full_link: /ja/docs/concepts/workloads/controllers/statefulset/ short_description: > Manages the deployment and scaling of a set of Pods, *and provides guarantees about the ordering and uniqueness* of these Pods. 
diff --git a/content/ja/docs/setup/learning-environment/minikube.md b/content/ja/docs/setup/learning-environment/minikube.md index e9884c44e0d17..c626ae23edc15 100644 --- a/content/ja/docs/setup/learning-environment/minikube.md +++ b/content/ja/docs/setup/learning-environment/minikube.md @@ -24,7 +24,7 @@ Minikubeはローカル環境でKubernetesを簡単に実行するためのツ ## インストール -[Minikubeのインストール](/docs/tasks/tools/install-minikube/) を参照 +[Minikubeのインストール](/ja/docs/tasks/tools/install-minikube/) を参照 ## クイックスタート diff --git a/content/ja/docs/setup/production-environment/tools/kops.md b/content/ja/docs/setup/production-environment/tools/kops.md index 0b44da785b1a2..ba2914f9669c1 100644 --- a/content/ja/docs/setup/production-environment/tools/kops.md +++ b/content/ja/docs/setup/production-environment/tools/kops.md @@ -31,7 +31,7 @@ a building block. kops builds on the kubeadm work. #### 要件 -You must have [kubectl](/docs/tasks/tools/install-kubectl/) installed in order for kops to work. +You must have [kubectl](/ja/docs/tasks/tools/install-kubectl/) installed in order for kops to work. #### インストール diff --git a/content/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index 9a7973469ff65..426ca84b25b58 100644 --- a/content/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -91,7 +91,7 @@ update-alternatives --set iptables /usr/sbin/iptables-legacy | TCP | Inbound | 10250 | Kubelet API | Self, Control plane | | TCP | Inbound | 30000-32767 | NodePort Services** | All | -** [NodePort Services](/docs/concepts/services-networking/service/)のデフォルトのポートの範囲 +** [NodePort Services](/ja/docs/concepts/services-networking/service/)のデフォルトのポートの範囲 \*の項目は書き換え可能です。そのため、あなたが指定したカスタムポートも開いていることを確認する必要があります。 @@ -138,7 +138,7 @@ Linux以外のノードでは、デフォルトで使用されるコンテナラ kubeadmは`kubelet`や`kubectl`をインストールまたは管理**しない**ため、kubeadmにインストールするKubernetesコントロールプレーンのバージョンと一致させる必要があります。そうしないと、予期しないバグのある動作につながる可能性のあるバージョン差異(version skew)が発生するリスクがあります。ただし、kubeletとコントロールプレーン間のマイナーバージョン差異(minor version skew)は_1つ_サポートされていますが、kubeletバージョンがAPIサーバーのバージョンを超えることはできません。たとえば、1.7.0を実行するkubeletは1.8.0 APIサーバーと完全に互換性がありますが、その逆はできません。 -`kubectl`のインストールに関する詳細情報は、[kubectlのインストールおよびセットアップ](/docs/tasks/tools/install-kubectl/)を参照してください。 +`kubectl`のインストールに関する詳細情報は、[kubectlのインストールおよびセットアップ](/ja/docs/tasks/tools/install-kubectl/)を参照してください。 {{< warning >}} これらの手順はシステムアップグレードによるすべてのKubernetesパッケージの更新を除きます。これはkubeadmとKubernetesが[アップグレードにおける特別な注意](docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)を必要とするからです。 diff --git a/content/ja/docs/setup/production-environment/tools/kubeadm/self-hosting.md b/content/ja/docs/setup/production-environment/tools/kubeadm/self-hosting.md index b3f9eeeb91fc8..da61fca3f9b42 100644 --- a/content/ja/docs/setup/production-environment/tools/kubeadm/self-hosting.md +++ b/content/ja/docs/setup/production-environment/tools/kubeadm/self-hosting.md @@ -10,7 +10,7 @@ weight: 100 As of 1.8, you can experimentally create a _self-hosted_ Kubernetes control plane. This means that key components such as the API server, controller -manager, and scheduler run as [DaemonSet pods](/docs/concepts/workloads/controllers/daemonset/) +manager, and scheduler run as [DaemonSet pods](/ja/docs/concepts/workloads/controllers/daemonset/) configured via the Kubernetes API instead of [static pods](/docs/tasks/administer-cluster/static-pod/) configured in the kubelet via static files. 
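For example, on a cluster that has been pivoted to a self-hosted control plane, you would expect the control plane components to appear as DaemonSets in the `kube-system` namespace. A quick way to check (the `self-hosted-` name prefix shown in the comment is illustrative; exact names depend on the kubeadm release):

```shell
# Expect DaemonSet-managed control plane components,
# e.g. entries like self-hosted-kube-apiserver
kubectl get daemonsets -n kube-system
kubectl get pods -n kube-system -o wide
```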
diff --git a/content/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md index b706cb041baef..0021ac6cee3fe 100644 --- a/content/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md +++ b/content/ja/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md @@ -126,7 +126,7 @@ Calico, Canal, and Flannel CNI providers are verified to support HostPort. For more information, see the [CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md). If your network provider does not support the portmap CNI plugin, you may need to use the [NodePort feature of -services](/docs/concepts/services-networking/service/#nodeport) or use `HostNetwork=true`. +services](/ja/docs/concepts/services-networking/service/#nodeport) or use `HostNetwork=true`. ## サービスIP経由でPodにアクセスすることができない diff --git a/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md index 29778a26302e9..c2406c7cc9daa 100644 --- a/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md +++ b/content/ja/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md @@ -1,7 +1,4 @@ --- -reviewers: -- michmike -- patricklang title: Intro to Windows support in Kubernetes content_template: templates/concept weight: 65 diff --git a/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md index 5fd9fd5d0c906..44d136f60b934 100644 --- a/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md +++ b/content/ja/docs/setup/production-environment/windows/user-guide-windows-containers.md @@ -1,7 +1,4 @@ --- -reviewers: -- michmike -- patricklang title: Guide for scheduling Windows containers in Kubernetes content_template: templates/concept weight: 75 diff --git a/content/ja/docs/setup/production-environment/windows/user-guide-windows-nodes.md b/content/ja/docs/setup/production-environment/windows/user-guide-windows-nodes.md index 1fbeb52fde3db..da91d2c18f1c2 100644 --- a/content/ja/docs/setup/production-environment/windows/user-guide-windows-nodes.md +++ b/content/ja/docs/setup/production-environment/windows/user-guide-windows-nodes.md @@ -1,7 +1,4 @@ --- -reviewers: -- michmike -- patricklang title: Guide for adding Windows Nodes in Kubernetes content_template: templates/concept weight: 70 diff --git a/content/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md new file mode 100644 index 0000000000000..fd5784a0932ee --- /dev/null +++ b/content/ja/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md @@ -0,0 +1,335 @@ +--- +title: 複数のクラスターへのアクセスを設定する +content_template: templates/task +weight: 30 +card: + name: tasks + weight: 40 +--- + + +{{% capture overview %}} + +ここでは、設定ファイルを使って複数のクラスターにアクセスする方法を紹介します。クラスター、ユーザー、contextの情報を一つ以上の設定ファイルにまとめることで、`kubectl config use-context`のコマンドを使ってクラスターを素早く切り替えることができます。 + +{{< note >}} +クラスターへのアクセスを設定するファイルを、*kubeconfig* ファイルと呼ぶことがあります。これは設定ファイルの一般的な呼び方です。`kubeconfig`という名前のファイルが存在するわけではありません。 +{{< /note >}} + +{{% /capture %}} + +{{% capture prerequisites %}} + +{{< include 
"task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + +{{% capture steps %}} + +## クラスター、ユーザー、contextを設定する + +例として、開発用のクラスターが一つ、実験用のクラスターが一つ、計二つのクラスターが存在する場合を考えます。`development`と呼ばれる開発用のクラスター内では、フロントエンドの開発者は`frontend`というnamespace内で、ストレージの開発者は`storage`というnamespace内で作業をします。`scratch`と呼ばれる実験用のクラスター内では、開発者はデフォルトのnamespaceで作業をするか、状況に応じて追加のnamespaceを作成します。開発用のクラスターは証明書を通しての認証を必要とします。実験用のクラスターはユーザーネームとパスワードを通しての認証を必要とします。 + +`config-exercise`というディレクトリを作成してください。`config-exercise`ディレクトリ内に、以下を含む`config-demo`というファイルを作成してください: + +```shell +apiVersion: v1 +kind: Config +preferences: {} + +clusters: +- cluster: + name: development +- cluster: + name: scratch + +users: +- name: developer +- name: experimenter + +contexts: +- context: + name: dev-frontend +- context: + name: dev-storage +- context: + name: exp-scratch +``` + +設定ファイルには、クラスター、ユーザー、contextの情報が含まれています。上記の`config-demo`設定ファイルには、二つのクラスター、二人のユーザー、三つのcontextの情報が含まれています。 + +`config-exercise`ディレクトリに移動してください。クラスター情報を設定ファイルに追加するために、以下のコマンドを実行してください: + +```shell +kubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file +kubectl config --kubeconfig=config-demo set-cluster scratch --server=https://5.6.7.8 --insecure-skip-tls-verify +``` + +ユーザー情報を設定ファイルに追加してください: + +```shell +kubectl config --kubeconfig=config-demo set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-seefile +kubectl config --kubeconfig=config-demo set-credentials experimenter --username=exp --password=some-password +``` + +{{< note >}} +`kubectl config unset users.`を実行すると、ユーザーを削除することができます。 +{{< /note >}} + +context情報を設定ファイルに追加してください: + +```shell +kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer +kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer +kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter +``` + +追加した情報を確認するために、`config-demo`ファイルを開いてください。`config-demo`ファイルを開く代わりに、`config view`のコマンドを使うこともできます。 + +```shell +kubectl config --kubeconfig=config-demo view +``` + +出力には、二つのクラスター、二人のユーザー、三つのcontextが表示されます: + +```shell +apiVersion: v1 +clusters: +- cluster: + certificate-authority: fake-ca-file + server: https://1.2.3.4 + name: development +- cluster: + insecure-skip-tls-verify: true + server: https://5.6.7.8 + name: scratch +contexts: +- context: + cluster: development + namespace: frontend + user: developer + name: dev-frontend +- context: + cluster: development + namespace: storage + user: developer + name: dev-storage +- context: + cluster: scratch + namespace: default + user: experimenter + name: exp-scratch +current-context: "" +kind: Config +preferences: {} +users: +- name: developer + user: + client-certificate: fake-cert-file + client-key: fake-key-file +- name: experimenter + user: + password: some-password + username: exp +``` + +上記の`fake-ca-file`、`fake-cert-file`、`fake-key-file`は、証明書ファイルの実際のパスのプレースホルダーです。環境内にある証明書ファイルの実際のパスに変更してください。 + +証明書ファイルのパスの代わりにbase64にエンコードされたデータを使用したい場合は、キーに`-data`の接尾辞を加えてください。例えば、`certificate-authority-data`、`client-certificate-data`、`client-key-data`とできます。 + +それぞれのcontextは、クラスター、ユーザー、namespaceの三つ組からなっています。例えば、`dev-frontend`contextは、`developer`ユーザーの認証情報を使って`development`クラスターの`frontend`namespaceへのアクセスを意味しています。 + +現在のcontextを設定してください: + +```shell +kubectl config --kubeconfig=config-demo use-context dev-frontend +``` + 
+これ以降実行される`kubectl`コマンドは、`dev-frontend`contextに設定されたクラスターとnamespaceに適用されます。また、`dev-frontend`contextに設定されたユーザーの認証情報を使用します。 + +現在のcontextの設定情報のみを確認するには、`--minify`フラグを使用してください。 + +```shell +kubectl config --kubeconfig=config-demo view --minify +``` + +出力には、`dev-frontend`contextの設定情報が表示されます: + +```shell +apiVersion: v1 +clusters: +- cluster: + certificate-authority: fake-ca-file + server: https://1.2.3.4 + name: development +contexts: +- context: + cluster: development + namespace: frontend + user: developer + name: dev-frontend +current-context: dev-frontend +kind: Config +preferences: {} +users: +- name: developer + user: + client-certificate: fake-cert-file + client-key: fake-key-file +``` + +今度は、実験用のクラスター内でしばらく作業する場合を考えます。 + +現在のcontextを`exp-scratch`に切り替えてください: + +```shell +kubectl config --kubeconfig=config-demo use-context exp-scratch +``` + +これ以降実行される`kubectl`コマンドは、`scratch`クラスター内のデフォルトnamespaceに適用されます。また、`exp-scratch`contextに設定されたユーザーの認証情報を使用します。 + +新しく切り替えた`exp-scratch`contextの設定を確認してください。 + +```shell +kubectl config --kubeconfig=config-demo view --minify +``` + +最後に、`development`クラスター内の`storage`namespaceでしばらく作業する場合を考えます。 + +現在のcontextを`dev-storage`に切り替えてください: + +```shell +kubectl config --kubeconfig=config-demo use-context dev-storage +``` + +新しく切り替えた`dev-storage`contextの設定を確認してください。 + +```shell +kubectl config --kubeconfig=config-demo view --minify +``` + +## 二つ目の設定ファイルを作成する + +`config-exercise`ディレクトリ内に、以下を含む`config-demo-2`というファイルを作成してください: + +```shell +apiVersion: v1 +kind: Config +preferences: {} + +contexts: +- context: + cluster: development + namespace: ramp + user: developer + name: dev-ramp-up +``` + +上記の設定ファイルは、`dev-ramp-up`というcontextを表します。 + +## KUBECONFIG環境変数を設定する + +`KUBECONFIG`という環境変数が存在するかを確認してください。もし存在する場合は、後で復元できるようにバックアップしてください。例えば: + +### Linux +```shell +export KUBECONFIG_SAVED=$KUBECONFIG +``` +### Windows PowerShell +```shell +$Env:KUBECONFIG_SAVED=$ENV:KUBECONFIG +``` + +`KUBECONFIG`環境変数は、設定ファイルのパスのリストです。リスト内のパスはLinuxとMacではコロンで区切られ、Windowsではセミコロンで区切られます。`KUBECONFIG`環境変数が存在する場合は、リスト内の設定ファイルの内容を確認してください。 + +一時的に`KUBECONFIG`環境変数に以下の二つのパスを追加してください。例えば:
+ +### Linux +```shell +export KUBECONFIG=$KUBECONFIG:config-demo:config-demo-2 +``` +### Windows PowerShell +```shell +$Env:KUBECONFIG=("config-demo;config-demo-2") +``` + +`config-exercise`ディレクトリ内から、以下のコマンドを実行してください: + +```shell +kubectl config view +``` + +出力には、`KUBECONFIG`環境変数に含まれる全てのファイルの情報がまとめて表示されます。`config-demo-2`ファイルに設定された`dev-ramp-up`contextの情報と、`config-demo`ファイルに設定された三つのcontextの情報がまとめてあることに注目してください: + +```shell +contexts: +- context: + cluster: development + namespace: frontend + user: developer + name: dev-frontend +- context: + cluster: development + namespace: ramp + user: developer + name: dev-ramp-up +- context: + cluster: development + namespace: storage + user: developer + name: dev-storage +- context: + cluster: scratch + namespace: default + user: experimenter + name: exp-scratch +``` + +kubeconfigファイルに関するさらなる情報を参照するには、[kubeconfigファイルを使ってクラスターへのアクセスを管理する](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)を参照してください。 + +## $HOME/.kubeディレクトリの内容を確認する + +既にクラスターを所持していて、`kubectl`を使ってクラスターを操作できる場合は、`$HOME/.kube`ディレクトリ内に`config`というファイルが存在する可能性が高いです。 + +`$HOME/.kube`に移動して、そこに存在するファイルを確認してください。`config`という設定ファイルが存在するはずです。他の設定ファイルも存在する可能性があります。全てのファイルの中身を確認してください。 + +## $HOME/.kube/configをKUBECONFIG環境変数に追加する + +もし`$HOME/.kube/config`ファイルが存在していて、既に`KUBECONFIG`環境変数に追加されていない場合は、`KUBECONFIG`環境変数に追加してください。例えば: + +### Linux +```shell +export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config +``` +### Windows Powershell +```shell +$Env:KUBECONFIG=($Env:KUBECONFIG;$HOME/.kube/config) +``` + +`KUBECONFIG`環境変数内のファイルからまとめられた設定情報を確認してください。`config-exercise`ディレクトリ内から、以下のコマンドを実行してください: + +```shell +kubectl config view +``` + +## クリーンアップ + +`KUBECONFIG`環境変数を元に戻してください。例えば: + +Linux: +```shell +export KUBECONFIG=$KUBECONFIG_SAVED +``` +Windows PowerShell +```shell +$Env:KUBECONFIG=$ENV:KUBECONFIG_SAVED +``` + +{{% /capture %}} + +{{% capture whatsnext %}} + +* [kubeconfigファイルを使ってクラスターへのアクセスを管理する](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) +* [kubectl config](/docs/reference/generated/kubectl/kubectl-commands#config) + +{{% /capture %}} \ No newline at end of file diff --git a/content/ja/docs/tasks/access-application-cluster/connecting-frontend-backend.md b/content/ja/docs/tasks/access-application-cluster/connecting-frontend-backend.md index 8d2a48b82ee8b..9ff0a604556a6 100644 --- a/content/ja/docs/tasks/access-application-cluster/connecting-frontend-backend.md +++ b/content/ja/docs/tasks/access-application-cluster/connecting-frontend-backend.md @@ -27,7 +27,7 @@ weight: 70 * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} * このタスクでは[Serviceで外部ロードバランサー](/docs/tasks/access-application-cluster/create-external-load-balancer/)を使用しますが、外部ロードバランサーの使用がサポートされている環境である必要があります。 - ご使用の環境がこれをサポートしていない場合は、代わりにタイプ[NodePort](/docs/concepts/services-networking/service/#nodeport)のServiceを使用できます。 + ご使用の環境がこれをサポートしていない場合は、代わりにタイプ[NodePort](/ja/docs/concepts/services-networking/service/#nodeport)のServiceを使用できます。 {{% /capture %}} @@ -189,8 +189,8 @@ curl http://${EXTERNAL_IP} # これを前に見たEXTERNAL-IPに置き換えま {{% capture whatsnext %}} -* [Services](/docs/concepts/services-networking/service/)の詳細 -* [ConfigMaps](/docs/tasks/configure-pod-container/configure-pod-configmap/)の詳細 +* [Service](/ja/docs/concepts/services-networking/service/)の詳細 +* [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/)の詳細 {{% /capture %}} diff --git a/content/ja/docs/tasks/access-application-cluster/service-access-application-cluster.md 
b/content/ja/docs/tasks/access-application-cluster/service-access-application-cluster.md
index 2958ed32422f0..48be31fdb45f6 100644
--- a/content/ja/docs/tasks/access-application-cluster/service-access-application-cluster.md
+++ b/content/ja/docs/tasks/access-application-cluster/service-access-application-cluster.md
@@ -37,7 +37,7 @@ weight: 60
    kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
    ```
    このコマンドは
-   [Deployment](/docs/concepts/workloads/controllers/deployment/)
+   [Deployment](/ja/docs/concepts/workloads/controllers/deployment/)
    オブジェクトとそれに紐付く
    [ReplicaSet](/ja/docs/concepts/workloads/controllers/replicaset/)
    オブジェクトを作成します。ReplicaSetは、Hello Worldアプリケーションが稼働している2つの
@@ -115,7 +115,7 @@ weight: 60
 ## service configuration fileの利用
 
 `kubectl expose`コマンドの代わりに、
-[service configuration file](/docs/concepts/services-networking/service/)
+[service configuration file](/ja/docs/concepts/services-networking/service/)
 を使用してServiceを作成することもできます。
 
 {{% /capture %}}
diff --git a/content/ja/docs/tasks/administer-cluster/enabling-endpointslices.md b/content/ja/docs/tasks/administer-cluster/enabling-endpointslices.md
new file mode 100644
index 0000000000000..736e6eb1c3b58
--- /dev/null
+++ b/content/ja/docs/tasks/administer-cluster/enabling-endpointslices.md
@@ -0,0 +1,44 @@
+---
+title: EndpointSliceの有効化
+content_template: templates/task
+---
+
+{{% capture overview %}}
+このページはKubernetesのEndpointSliceの有効化の概要を説明します。
+{{% /capture %}}
+
+
+{{% capture prerequisites %}}
+  {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## 概要
+
+EndpointSliceは、KubernetesのEndpointsに対してスケーラブルで拡張可能な代替手段を提供します。Endpointsが提供する機能のベースの上に構築し、スケーラブルな方法で拡張します。Serviceが多数(100以上)のネットワークエンドポイントを持つ場合、それらは単一の大きなEndpointsリソースではなく、複数の小さなEndpointSliceに分割されます。
+
+## EndpointSliceの有効化
+
+{{< feature-state for_k8s_version="v1.17" state="beta" >}}
+
+{{< note >}}
+EndpointSliceは、最終的には既存のEndpointsを置き換える可能性がありますが、多くのKubernetesコンポーネントはまだ既存のEndpointsに依存しています。現時点ではEndpointSliceを有効化することは、Endpointsの置き換えではなく、クラスター内のEndpointsへの追加とみなされる必要があります。
+{{< /note >}}
+
+EndpointSliceはベータ版の機能とみなされますが、デフォルトではAPIのみが有効です。kube-proxyによるEndpointSliceコントローラーとEndpointSliceの使用は、デフォルトでは有効になっていません。
+
+EndpointSliceコントローラーはクラスター内にEndpointSliceを作成し、管理します。これは、{{< glossary_tooltip text="kube-apiserver" term_id="kube-apiserver" >}}と{{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}の`EndpointSlice`の[フィーチャーゲート](/docs/reference/command-line-tools-reference/feature-gates/)で有効にできます(`--feature-gates=EndpointSlice=true`)。
+
+スケーラビリティ向上のため、kube-proxyでフィーチャーゲートを有効にして、Endpointsの代わりにEndpointSliceをデータソースとして使用することもできます。
+
+## EndpointSliceの使用
+
+クラスター内でEndpointSliceを完全に有効にすると、各Endpointsリソースに対応するEndpointSliceリソースが表示されます。既存のEndpointsの機能をサポートすることに加えて、EndpointSliceはトポロジーなどの新しい情報を含める必要があります。これらにより、クラスター内のネットワークエンドポイントのスケーラビリティと拡張性が大きく向上します。
+
+{{% capture whatsnext %}}
+
+* [EndpointSlice](/docs/concepts/services-networking/endpoint-slices/)を参照してください。
+* [サービスとアプリケーションの接続](/ja/docs/concepts/services-networking/connect-applications-service/)を参照してください。
+
+{{% /capture %}}
diff --git a/content/ja/docs/tasks/configure-pod-container/configure-projected-volume-storage.md b/content/ja/docs/tasks/configure-pod-container/configure-projected-volume-storage.md
new file mode 100644
index 0000000000000..5ee17ea721330
--- /dev/null
+++ b/content/ja/docs/tasks/configure-pod-container/configure-projected-volume-storage.md
@@ -0,0 +1,81 @@
+---
+title: ストレージにProjectedボリュームを使用するようPodを設定する +content_template: templates/task +weight: 70 +--- + +{{% capture overview %}} +このページでは、[`projected`](/docs/concepts/storage/volumes/#projected)(投影)ボリュームを使用して、既存の複数のボリュームソースを同一ディレクトリ内にマウントする方法を説明します。 +現在、`secret`、`configMap`、`downwardAPI`および`serviceAccountToken`ボリュームを投影できます。 + +{{< note >}} +`serviceAccountToken`はボリュームタイプではありません。 +{{< /note >}} +{{% /capture %}} + +{{% capture prerequisites %}} +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} +{{% /capture %}} + +{{% capture steps %}} +## ProjectedボリュームをPodに設定する + +この課題では、ローカルファイルからユーザーネームおよびパスワードの{{< glossary_tooltip text="Secret" term_id="secret" >}}を作成します。 +次に、単一のコンテナを実行するPodを作成し、[`projected`](/docs/concepts/storage/volumes/#projected)ボリュームを使用してそれぞれのSecretを同じ共有ディレクトリにマウントします。 + +以下にPodの設定ファイルを示します: + +{{< codenew file="pods/storage/projected.yaml" >}} + +1. Secretを作成します: + + ```shell + # ユーザーネームおよびパスワードを含むファイルを作成します: + echo -n "admin" > ./username.txt + echo -n "1f2d1e2e67df" > ./password.txt + + # これらのファイルからSecretを作成します: + kubectl create secret generic user --from-file=./username.txt + kubectl create secret generic pass --from-file=./password.txt + ``` +1. Podを作成します: + + ```shell + kubectl apply -f https://k8s.io/examples/pods/storage/projected.yaml + ``` +1. Pod内のコンテナが実行されていることを確認するため、Podの変更を監視します: + + ```shell + kubectl get --watch pod test-projected-volume + ``` + 出力は次のようになります: + ``` + NAME READY STATUS RESTARTS AGE + test-projected-volume 1/1 Running 0 14s + ``` +1. 別の端末にて、実行中のコンテナへのシェルを取得します: + + ```shell + kubectl exec -it test-projected-volume -- /bin/sh + ``` +1. シェル内にて、投影されたソースを含む`projected-volume`ディレクトリが存在することを確認します: + + ```shell + ls /projected-volume/ + ``` + +## クリーンアップ + +PodおよびSecretを削除します: + +```shell +kubectl delete pod test-projected-volume +kubectl delete secret user pass +``` + +{{% /capture %}} + +{{% capture whatsnext %}} +* [`projected`](/docs/concepts/storage/volumes/#projected)ボリュームについてさらに学ぶ +* [all-in-oneボリューム](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/node/all-in-one-volume.md)のデザインドキュメントを読む +{{% /capture %}} diff --git a/content/ja/docs/tasks/configure-pod-container/share-process-namespace.md b/content/ja/docs/tasks/configure-pod-container/share-process-namespace.md index fde9fb5dc3a69..c24df13f1fa43 100644 --- a/content/ja/docs/tasks/configure-pod-container/share-process-namespace.md +++ b/content/ja/docs/tasks/configure-pod-container/share-process-namespace.md @@ -7,7 +7,7 @@ weight: 160 {{% capture overview %}} -{{< feature-state state="beta" >}} +{{< feature-state state="stable" for_k8s_version="v1.17" >}} このページでは、プロセス名前空間を共有するPodを構成する方法を示します。 プロセス名前空間の共有が有効になっている場合、コンテナ内のプロセスは、そのPod内の他のすべてのコンテナに表示されます。 @@ -20,9 +20,6 @@ weight: 160 {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -プロセス名前空間の共有は**ベータ**機能であり、デフォルトで有効になっています。 -`--feature-gates=PodShareProcessNamespace=false`を設定することで無効にできます。 - {{% /capture %}} {{% capture steps %}} diff --git a/content/ja/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md b/content/ja/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md index fd64462b6339b..406466cc1caa1 100644 --- a/content/ja/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md +++ b/content/ja/docs/tasks/debug-application-cluster/debug-pod-replication-controller.md @@ -13,7 +13,7 @@ content_template: templates/task {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} -* 
[Pod](/docs/concepts/workloads/pods/pod/)と[Podのライフサイクル](/docs/concepts/workloads/pods/pod-lifecycle/)の基本を理解している必要があります。 +* [Pod](/ja/docs/concepts/workloads/pods/pod/)と[Podのライフサイクル](/ja/docs/concepts/workloads/pods/pod-lifecycle/)の基本を理解している必要があります。 {{% /capture %}} diff --git a/content/ja/docs/tasks/debug-application-cluster/debug-service.md b/content/ja/docs/tasks/debug-application-cluster/debug-service.md index 494a349939f13..a59bef5a6a3b7 100644 --- a/content/ja/docs/tasks/debug-application-cluster/debug-service.md +++ b/content/ja/docs/tasks/debug-application-cluster/debug-service.md @@ -329,7 +329,7 @@ kubectl get service hostnames -o json * `targetPort`を名前で定義しようとしている場合、`Pod`は同じ名前でポートを公開していますか? * ポートの`protocol`は`Pod`のものと同じですか? -## ServiceにEndpointがあるか? +## ServiceにEndpointsがあるか? ここまで来たということは、`Service`は存在し、DNSによって名前解決できることが確認できているでしょう。 ここでは、実行した`Pod`が`Service`によって実際に選択されていることを確認しましょう。 @@ -347,7 +347,7 @@ hostnames-yp2kp 1/1 Running 0 1h "AGE"列は、これらの`Pod`が約1時間前のものであることを示しており、それらが正常に実行され、クラッシュしていないことを意味します。 `-l app=hostnames`引数はラベルセレクターで、ちょうど私たちの`Service`に定義されているものと同じです。 -Kubernetesシステム内には、すべての`Service`のセレクターを評価し、結果を`Endpoint`オブジェクトに保存するコントロールループがあります。 +Kubernetesシステム内には、すべての`Service`のセレクターを評価し、結果を`Endpoints`オブジェクトに保存するコントロールループがあります。 ```shell kubectl get endpoints hostnames @@ -355,7 +355,7 @@ NAME ENDPOINTS hostnames 10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376 ``` -これにより、Endpointコントローラーが`Service`の正しい`Pod`を見つけていることを確認できます。 +これにより、Endpointsコントローラーが`Service`の正しい`Pod`を見つけていることを確認できます。 `hostnames`行が空白の場合、`Service`の`spec.selector`フィールドが実際に`Pod`の`metadata.labels`値を選択していることを確認する必要があります。 よくある間違いは、タイプミスまたは他のエラー、たとえば`Service`が`run=hostnames`を選択しているのに`Deployment`が`app=hostnames`を指定していることです。 @@ -379,7 +379,7 @@ u@pod$ wget -qO- 10.244.0.7:9376 hostnames-yp2kp ``` -`Endpoint`リスト内の各`Pod`は、それぞれの自身のホスト名を返すはずです。 +`Endpoints`リスト内の各`Pod`は、それぞれの自身のホスト名を返すはずです。 そうならない(または、あなた自身の`Pod`の正しい振る舞いにならない)場合は、そこで何が起こっているのかを調査する必要があります。 `kubectl logs`が役立つかもしれません。あるいは、`kubectl exec`で直接`Pod`にアクセスし、そこでサービスをチェックしましょう。 @@ -398,7 +398,7 @@ hostnames-632524106-tlaok 1/1 Running 0 2m ## kube-proxyは機能しているか? -ここに到達したのなら、`Service`は実行され、`Endpoint`があり、`Pod`が実際にサービスを提供しています。 +ここに到達したのなら、`Service`は実行され、`Endpoints`があり、`Pod`が実際にサービスを提供しています。 この時点で、`Service`のプロキシーメカニズム全体が疑わしいです。 ひとつひとつ確認しましょう。 @@ -579,7 +579,7 @@ UP BROADCAST RUNNING PROMISC MULTICAST MTU:1460 Metric:1 ## 助けを求める ここまでたどり着いたということは、とてもおかしなことが起こっています。 -`Service`は実行中で、`Endpoint`があり、`Pod`は実際にサービスを提供しています。 +`Service`は実行中で、`Endpoints`があり、`Pod`は実際にサービスを提供しています。 DNSは動作していて、`iptables`ルールがインストールされていて、`kube-proxy`も誤動作していないようです。 それでも、あなたの`Service`は機能していません。 おそらく私たちにお知らせ頂いた方がよいでしょう。調査をお手伝いします! 
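Before filing a report, it helps to capture the state that the page walks through. A hedged sketch, reusing the `hostnames` Service and the `app=hostnames` label from the example above:

```shell
# Sketch: collect the Service, its Endpoints, and the matching Pods in one place.
kubectl get service hostnames -o yaml
kubectl get endpoints hostnames
kubectl get pods -l app=hostnames -o wide
```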
diff --git a/content/ja/docs/tasks/run-application/run-replicated-stateful-application.md b/content/ja/docs/tasks/run-application/run-replicated-stateful-application.md index b76b8e0ada812..bd430ce44b2ac 100644 --- a/content/ja/docs/tasks/run-application/run-replicated-stateful-application.md +++ b/content/ja/docs/tasks/run-application/run-replicated-stateful-application.md @@ -8,7 +8,7 @@ weight: 30 このページでは、[StatefulSet](/ja/docs/concepts/workloads/controllers/statefulset/) コントローラーを使用して、レプリカを持つステートフルアプリケーションを実行する方法を説明します。 -ここでの例は、非同期レプリケーションを行う複数のスレーブを持つ、単一マスターのMySQLです。 +ここでの例は、非同期レプリケーションを行う複数のスレーブを持つ、単一マスターのMySQLです。 **この例は本番環境向けの構成ではない**ことに注意してください。 具体的には、MySQLの設定が安全ではないデフォルトのままとなっています。 @@ -23,7 +23,7 @@ weight: 30 * このチュートリアルは、あなたが[PersistentVolume](/docs/concepts/storage/persistent-volumes/) と[StatefulSet](/ja/docs/concepts/workloads/controllers/statefulset/)、 さらには[Pod](/ja/docs/concepts/workloads/pods/pod/)、 - [Service](/docs/concepts/services-networking/service/)、 + [Service](/ja/docs/concepts/services-networking/service/)、 [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/)などの 他のコアな概念に精通していることを前提としています。 * MySQLに関する知識は記事の理解に役立ちますが、 @@ -76,7 +76,7 @@ kubectl apply -f https://k8s.io/examples/application/mysql/mysql-services.yaml {{< codenew file="application/mysql/mysql-services.yaml" >}} ヘッドレスサービスは、StatefulSetコントローラーが -StatefulSetの一部であるPodごとに作成するDNSエントリーのベースエントリーを提供します。 +StatefulSetの一部であるPodごとに作成するDNSエントリーのベースエントリーを提供します。 この例ではヘッドレスサービスの名前は`mysql`なので、同じKubernetesクラスタの 同じ名前空間内の他のPodは、`.mysql`を名前解決することでPodにアクセスできます。 diff --git a/content/ja/docs/tasks/run-application/run-single-instance-stateful-application.md b/content/ja/docs/tasks/run-application/run-single-instance-stateful-application.md index ce6e5643af365..c7efac3f8e855 100644 --- a/content/ja/docs/tasks/run-application/run-single-instance-stateful-application.md +++ b/content/ja/docs/tasks/run-application/run-single-instance-stateful-application.md @@ -168,9 +168,9 @@ PersistentVolumeを手動でプロビジョニングした場合は、Persistent {{% capture whatsnext %}} -* [Deploymentオブジェクト](/docs/concepts/workloads/controllers/deployment/)についてもっと学ぶ +* [Deploymentオブジェクト](/ja/docs/concepts/workloads/controllers/deployment/)についてもっと学ぶ -* [アプリケーションのデプロイ](/docs/user-guide/deploying-applications/)についてもっと学ぶ +* [アプリケーションのデプロイ](/ja/docs/tasks/run-application/run-stateless-application-deployment/)についてもっと学ぶ * [kubectl runのドキュメント](/docs/reference/generated/kubectl/kubectl-commands/#run) diff --git a/content/ja/docs/tasks/run-application/run-stateless-application-deployment.md b/content/ja/docs/tasks/run-application/run-stateless-application-deployment.md index c38b40171a2ac..dd4172138f266 100644 --- a/content/ja/docs/tasks/run-application/run-stateless-application-deployment.md +++ b/content/ja/docs/tasks/run-application/run-stateless-application-deployment.md @@ -32,7 +32,7 @@ weight: 10 ## nginx deploymentの作成と探検 -Kubernetes Deploymentオブジェクトを作成することでアプリケーションを実行できます。また、YAMLファイルでDeploymentを記述できます。例えば、このYAMLファイルはnginx:1.7.9 Dockerイメージを実行するデプロイメントを記述しています: +Kubernetes Deploymentオブジェクトを作成することでアプリケーションを実行できます。また、YAMLファイルでDeploymentを記述できます。例えば、このYAMLファイルはnginx:1.14.2 Dockerイメージを実行するデプロイメントを記述しています: {{< codenew file="application/deployment.yaml" >}} @@ -62,7 +62,7 @@ Kubernetes Deploymentオブジェクトを作成することでアプリケー Labels: app=nginx Containers: nginx: - Image: nginx:1.7.9 + Image: nginx:1.14.2 Port: 80/TCP Environment: Mounts: @@ -143,7 +143,7 @@ Deploymentを名前を指定して削除します: {{% capture whatsnext %}} -* 
[Deploymentオブジェクト](/docs/concepts/workloads/controllers/deployment/)の詳細 +* [Deploymentオブジェクト](/ja/docs/concepts/workloads/controllers/deployment/)の詳細 {{% /capture %}} diff --git a/content/ja/docs/tasks/service-catalog/install-service-catalog-using-helm.md b/content/ja/docs/tasks/service-catalog/install-service-catalog-using-helm.md new file mode 100644 index 0000000000000..cac6668f16818 --- /dev/null +++ b/content/ja/docs/tasks/service-catalog/install-service-catalog-using-helm.md @@ -0,0 +1,116 @@ +--- +title: Helmを使用したサービスカタログのインストール +content_template: templates/task +--- + +{{% capture overview %}} +{{< glossary_definition term_id="service-catalog" length="all" prepend="サービスカタログは" >}} + +[Helm](https://helm.sh/)を使用してKubernetesクラスターにサービスカタログをインストールします。手順の最新情報は[kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog/blob/master/docs/install.md)リポジトリーを参照してください。 + +{{% /capture %}} + + +{{% capture prerequisites %}} +* [サービスカタログ](/docs/concepts/service-catalog/)の基本概念を理解してください。 +* サービスカタログを使用するには、Kubernetesクラスターのバージョンが1.7以降である必要があります。 +* KubernetesクラスターのクラスターDNSを有効化する必要があります。 + * クラウド上のKubernetesクラスター、または{{< glossary_tooltip text="Minikube" term_id="minikube" >}}を使用している場合、クラスターDNSはすでに有効化されています。 + * `hack/local-up-cluster.sh`を使用している場合は、環境変数`KUBE_ENABLE_CLUSTER_DNS`が設定されていることを確認し、インストールスクリプトを実行してください。 +* [kubectlのインストールおよびセットアップ](/ja/docs/tasks/tools/install-kubectl/)を参考に、v1.7以降のkubectlをインストールし、設定を行ってください。 +* v2.7.0以降の[Helm](http://helm.sh/)をインストールしてください。 + * [Helm install instructions](https://helm.sh/docs/intro/install/)を参考にしてください。 + * 上記のバージョンのHelmをすでにインストールしている場合は、`helm init`を実行し、HelmのサーバーサイドコンポーネントであるTillerをインストールしてください。 + +{{% /capture %}} + + +{{% capture steps %}} +## Helmリポジトリーにサービスカタログを追加 + +Helmをインストールし、以下のコマンドを実行することでローカルマシンに*service-catalog*のHelmリポジトリーを追加します。 + + +```shell +helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com +``` + +以下のコマンドを実行し、インストールに成功していることを確認します。 + +```shell +helm search service-catalog +``` + +インストールが成功していれば、出力は以下のようになります: + +``` +NAME CHART VERSION APP VERSION DESCRIPTION +svc-cat/catalog 0.2.1 service-catalog API server and controller-manager helm chart +svc-cat/catalog-v0.2 0.2.2 service-catalog API server and controller-manager helm chart +``` + +## RBACの有効化 + +KubernetesクラスターのRBACを有効化することで、Tiller Podに`cluster-admin`アクセスを持たせます。 + +v0.25以前のMinikubeを使用している場合は、明示的にRBACを有効化して起動する必要があります: + +```shell +minikube start --extra-config=apiserver.Authorization.Mode=RBAC +``` + +v0.26以降のMinikubeを使用している場合は、以下のコマンドを実行してください。 + +```shell +minikube start +``` + +v0.26以降のMinikubeを使用している場合、`--extra-config`を指定しないでください。 +このフラグは--extra-config=apiserver.authorization-modeを指定するものに変更されており、現在MinikubeではデフォルトでRBACが有効化されています。 +古いフラグを指定すると、スタートコマンドが応答しなくなることがあります。 + +`hack/local-up-cluster.sh`を使用している場合、環境変数`AUTHORIZATION_MODE`を以下の値に設定してください: + +``` +AUTHORIZATION_MODE=Node,RBAC hack/local-up-cluster.sh -O +``` + +`helm init`は、デフォルトで`kube-system`のnamespaceにTiller Podをインストールし、Tillerは`default`のServiceAccountを使用するように設定されています。 + +{{< note >}} +`helm init`を実行する際に`--tiller-namespace`または`--service-account`のフラグを使用する場合、以下のコマンドの`--serviceaccount`フラグには適切なnamespaceとServiceAccountを指定する必要があります。 +{{< /note >}} + +Tillerに`cluster-admin`アクセスを設定する場合: + +```shell +kubectl create clusterrolebinding tiller-cluster-admin \ + --clusterrole=cluster-admin \ + --serviceaccount=kube-system:default +``` + + +## Kubernetesクラスターにサービスカタログをインストール + +以下のコマンドを使用して、Helmリポジトリーのrootからサービスカタログをインストールします: + +{{< tabs name="helm-versions" >}} +{{% tab name="Helm 
バージョン3" %}} +```shell +helm install catalog svc-cat/catalog --namespace catalog +``` +{{% /tab %}} +{{% tab name="Helm バージョン2" %}} +```shell +helm install svc-cat/catalog --name catalog --namespace catalog +``` +{{% /tab %}} +{{< /tabs >}} +{{% /capture %}} + + +{{% capture whatsnext %}} +* [sample service brokers](https://github.com/openservicebrokerapi/servicebroker/blob/master/gettingStarted.md#sample-service-brokers) +* [kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog) + +{{% /capture %}} diff --git a/content/ja/docs/tasks/tools/install-kubectl.md b/content/ja/docs/tasks/tools/install-kubectl.md index 451a64f3639f7..1e6bb6b3a5927 100644 --- a/content/ja/docs/tasks/tools/install-kubectl.md +++ b/content/ja/docs/tasks/tools/install-kubectl.md @@ -18,7 +18,7 @@ kubectlのバージョンは、クラスターのマイナーバージョンと {{% capture steps %}} -## Linuxへkubectlをインストールする +## Linuxへkubectlをインストールする {#install-kubectl-on-linux} ### curlを使用してLinuxへkubectlのバイナリをインストールする @@ -97,7 +97,7 @@ kubectl version {{< /tab >}} {{< /tabs >}} -## macOSへkubectlをインストールする +## macOSへkubectlをインストールする {#install-kubectl-on-macos} ### curlを使用してmacOSへkubectlのバイナリをインストールする @@ -170,7 +170,7 @@ macOSで[MacPorts](https://macports.org/)パッケージマネージャーを使 kubectl version ``` -## Windowsへkubectlをインストールする +## Windowsへkubectlをインストールする {#install-kubectl-on-windows} ### curlを使用してWindowsへkubectlのバイナリをインストールする @@ -467,7 +467,7 @@ compinit {{% /capture %}} {{% capture whatsnext %}} -* [Minikubeをインストールする](/docs/tasks/tools/install-minikube/) +* [Minikubeをインストールする](/ja/docs/tasks/tools/install-minikube/) * クラスターの作成に関する詳細を[スタートガイド](/docs/setup/)で確認する * [アプリケーションを起動して公開する方法を学ぶ](/docs/tasks/access-application-cluster/service-access-application-cluster/) * あなたが作成していないクラスターにアクセスする必要がある場合は、[クラスターアクセスドキュメントの共有](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)を参照してください diff --git a/content/ja/docs/tasks/tools/install-minikube.md b/content/ja/docs/tasks/tools/install-minikube.md index 52cc43a44ed73..fa98a198be184 100644 --- a/content/ja/docs/tasks/tools/install-minikube.md +++ b/content/ja/docs/tasks/tools/install-minikube.md @@ -65,7 +65,7 @@ Hyper-V Requirements: A hypervisor has been detected. 
Features required for ### kubectlのインストール kubectlがインストールされていることを確認してください。 -[kubectlのインストールとセットアップ](/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux)の指示に従ってkubectlをインストールできます。 +[kubectlのインストールとセットアップ](/ja/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux)の指示に従ってkubectlをインストールできます。 ### ハイパーバイザーのインストール @@ -110,7 +110,7 @@ sudo install minikube /usr/local/bin/ ### kubectlのインストール kubectlがインストールされていることを確認してください。 -[kubectlのインストールとセットアップ](/docs/tasks/tools/install-kubectl/#install-kubectl-on-macos)の指示に従ってkubectlをインストールできます。 +[kubectlのインストールとセットアップ](/ja/docs/tasks/tools/install-kubectl/#install-kubectl-on-macos)の指示に従ってkubectlをインストールできます。 ### ハイパーバイザーのインストール @@ -147,7 +147,7 @@ sudo mv minikube /usr/local/bin ### kubectlのインストール kubectlがインストールされていることを確認してください。 -[kubectlのインストールとセットアップ](/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows)の指示に従ってkubectlをインストールできます。 +[kubectlのインストールとセットアップ](/ja/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows)の指示に従ってkubectlをインストールできます。 ### ハイパーバイザーのインストール diff --git a/content/ja/docs/tutorials/_index.md b/content/ja/docs/tutorials/_index.md index f00c9e1b2e874..dfecf9f192b4a 100644 --- a/content/ja/docs/tutorials/_index.md +++ b/content/ja/docs/tutorials/_index.md @@ -21,7 +21,7 @@ content_template: templates/concept * [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#) -* [Hello Minikube](/docs/tutorials/hello-minikube/) +* [Hello Minikube](/ja/docs/tutorials/hello-minikube/) ## 設定 diff --git a/content/ja/docs/tutorials/hello-minikube.md b/content/ja/docs/tutorials/hello-minikube.md index 40085fe2cb8f0..428175e21ee69 100644 --- a/content/ja/docs/tutorials/hello-minikube.md +++ b/content/ja/docs/tutorials/hello-minikube.md @@ -18,7 +18,7 @@ card: このチュートリアルでは、[Minikube](/docs/getting-started-guides/minikube)とKatacodaを使用して、Kubernetes上でシンプルなHello WorldのNode.jsアプリケーションを動かす方法を紹介します。Katacodaはブラウザで無償のKubernetes環境を提供します。 {{< note >}} -[Minikubeをローカルにインストール](/docs/tasks/tools/install-minikube/)している場合もこのチュートリアルを進めることが可能です。 +[Minikubeをローカルにインストール](/ja/docs/tasks/tools/install-minikube/)している場合もこのチュートリアルを進めることが可能です。 {{< /note >}} {{% /capture %}} @@ -65,7 +65,7 @@ card: ## Deploymentの作成 -Kubernetesの[*Pod*](/docs/concepts/workloads/pods/pod/) は、コンテナの管理やネットワーキングの目的でまとめられた、1つ以上のコンテナのグループです。このチュートリアルのPodがもつコンテナは1つのみです。Kubernetesの [*Deployment*](/docs/concepts/workloads/controllers/deployment/) はPodの状態を確認し、Podのコンテナが停止した場合には再起動します。DeploymentはPodの作成やスケールを管理するために推奨される方法(手段)です。 +Kubernetesの[*Pod*](/ja/docs/concepts/workloads/pods/pod/) は、コンテナの管理やネットワーキングの目的でまとめられた、1つ以上のコンテナのグループです。このチュートリアルのPodがもつコンテナは1つのみです。Kubernetesの [*Deployment*](/ja/docs/concepts/workloads/controllers/deployment/) はPodの状態を確認し、Podのコンテナが停止した場合には再起動します。DeploymentはPodの作成やスケールを管理するために推奨される方法(手段)です。 1. `kubectl create` コマンドを使用してPodを管理するDeploymentを作成してください。Podは提供されたDockerイメージを元にコンテナを実行します。 @@ -114,7 +114,7 @@ Kubernetesの[*Pod*](/docs/concepts/workloads/pods/pod/) は、コンテナの ## Serviceの作成 -通常、PodはKubernetesクラスタ内部のIPアドレスからのみアクセスすることができます。`hello-node`コンテナをKubernetesの仮想ネットワークの外部からアクセスするためには、Kubernetesの[*Service*](/docs/concepts/services-networking/service/)としてポッドを公開する必要があります。 +通常、PodはKubernetesクラスタ内部のIPアドレスからのみアクセスすることができます。`hello-node`コンテナをKubernetesの仮想ネットワークの外部からアクセスするためには、Kubernetesの[*Service*](/ja/docs/concepts/services-networking/service/)としてポッドを公開する必要があります。 1. 
`kubectl expose` コマンドを使用してPodをインターネットに公開します: @@ -257,8 +257,8 @@ minikube delete {{% capture whatsnext %}} -* [Deploymentオブジェクト](/docs/concepts/workloads/controllers/deployment/)について学ぶ. -* [アプリケーションのデプロイ](/docs/user-guide/deploying-applications/)について学ぶ. -* [Serviceオブジェクト](/docs/concepts/services-networking/service/)について学ぶ. +* [Deploymentオブジェクト](/ja/docs/concepts/workloads/controllers/deployment/)について学ぶ. +* [アプリケーションのデプロイ](/ja/docs/tasks/run-application/run-stateless-application-deployment/)について学ぶ. +* [Serviceオブジェクト](/ja/docs/concepts/services-networking/service/)について学ぶ. {{% /capture %}} diff --git a/content/ja/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/ja/docs/tutorials/kubernetes-basics/expose/expose-intro.html index dd7acb52ac4ce..c1ad09aa2db01 100644 --- a/content/ja/docs/tutorials/kubernetes-basics/expose/expose-intro.html +++ b/content/ja/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -28,7 +28,7 @@

 目標

 Kubernetes Serviceの概要

-Kubernetes Podの寿命は永続的ではありません。実際、Podにはライフサイクルがあります。ワーカーのノードが停止すると、そのノードで実行されているPodも失われます。そうなると、ReplicaSetは、新しいPodを作成してアプリケーションを実行し続けるために、クラスターを動的に目的の状態に戻すことができます。別の例として、3つのレプリカを持つ画像処理バックエンドを考えます。それらのレプリカは交換可能です。フロントエンドシステムはバックエンドのレプリカを気にしたり、Podが失われて再作成されたとしても配慮すべきではありません。ただし、Kubernetesクラスター内の各Podは、同じノード上のPodであっても一意のIPアドレスを持っているため、アプリケーションが機能し続けるように、Pod間の変更を自動的に調整する方法が必要です。
+Kubernetes Podの寿命は永続的ではありません。実際、Podにはライフサイクルがあります。ワーカーのノードが停止すると、そのノードで実行されているPodも失われます。そうなると、ReplicaSetは、新しいPodを作成してアプリケーションを実行し続けるために、クラスターを動的に目的の状態に戻すことができます。別の例として、3つのレプリカを持つ画像処理バックエンドを考えます。それらのレプリカは交換可能です。フロントエンドシステムはバックエンドのレプリカを気にしたり、Podが失われて再作成されたとしても配慮すべきではありません。ただし、Kubernetesクラスター内の各Podは、同じノード上のPodであっても一意のIPアドレスを持っているため、アプリケーションが機能し続けるように、Pod間の変更を自動的に調整する方法が必要です。

KubernetesのServiceは、Podの論理セットと、それらにアクセスするためのポリシーを定義する抽象概念です。Serviceによって、依存Pod間の疎結合が可能になります。Serviceは、すべてのKubernetesオブジェクトのように、YAML(推奨)またはJSONを使って定義されます。Serviceが対象とするPodのセットは通常、LabelSelectorによって決定されます(なぜ仕様にセレクタを含めずにServiceが必要になるのかについては下記を参照してください)。
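To make the LabelSelector relationship concrete, here is a minimal sketch of a Service manifest; the `kubernetes-bootcamp` name and label are illustrative, not taken from the hunk above.

```shell
# Sketch: a Service whose selector picks the backend Pods by label.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-bootcamp
spec:
  selector:
    app: kubernetes-bootcamp   # the LabelSelector that matches the Pods
  ports:
  - port: 80
    targetPort: 8080
EOF
```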

diff --git a/content/ja/docs/tutorials/kubernetes-basics/scale/scale-intro.html b/content/ja/docs/tutorials/kubernetes-basics/scale/scale-intro.html index d943c46e11be2..602fd843e50f1 100644 --- a/content/ja/docs/tutorials/kubernetes-basics/scale/scale-intro.html +++ b/content/ja/docs/tutorials/kubernetes-basics/scale/scale-intro.html @@ -27,7 +27,7 @@

 目標

 アプリケーションのスケーリング

-前回のモジュールでは、Deploymentを作成し、それをService経由で公開しました。該当のDeploymentでは、アプリケーションを実行するためのPodを1つだけ作成しました。トラフィックが増加した場合、ユーザーの需要に対応するためにアプリケーションをスケールする必要があります。
+前回のモジュールでは、Deploymentを作成し、それをService経由で公開しました。該当のDeploymentでは、アプリケーションを実行するためのPodを1つだけ作成しました。トラフィックが増加した場合、ユーザーの需要に対応するためにアプリケーションをスケールする必要があります。

スケーリングは、Deploymentのレプリカの数を変更することによって実現可能です。
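A minimal sketch of what changing the replica count looks like in practice (the Deployment name is illustrative):

```shell
# Scale the Deployment to four replicas, then confirm the new desired count.
kubectl scale deployments/kubernetes-bootcamp --replicas=4
kubectl get deployments
```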

diff --git a/content/ja/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/ja/docs/tutorials/stateless-application/expose-external-ip-address.md index 8248027d61389..74d973fdf2340 100644 --- a/content/ja/docs/tutorials/stateless-application/expose-external-ip-address.md +++ b/content/ja/docs/tutorials/stateless-application/expose-external-ip-address.md @@ -13,7 +13,7 @@ weight: 10 {{% capture prerequisites %}} - * [kubectl](/docs/tasks/tools/install-kubectl/)をインストールしてください。 + * [kubectl](/ja/docs/tasks/tools/install-kubectl/)をインストールしてください。 * Kubernetesクラスターを作成する際に、Google Kubernetes EngineやAmazon Web Servicesのようなクラウドプロバイダーを使用します。このチュートリアルでは、クラウドプロバイダーを必要とする[外部ロードバランサー](/docs/tasks/access-application-cluster/create-external-load-balancer/)を作成します。 @@ -44,7 +44,7 @@ kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml ``` -上記のコマンドにより、[Deployment](/docs/concepts/workloads/controllers/deployment/)オブジェクトを作成し、[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)オブジェクトを関連づけます。ReplicaSetには5つの[Pod](/docs/concepts/workloads/pods/pod/)があり、それぞれHello Worldアプリケーションが起動しています。 +上記のコマンドにより、[Deployment](/ja/docs/concepts/workloads/controllers/deployment/)オブジェクトを作成し、[ReplicaSet](/ja/docs/concepts/workloads/controllers/replicaset/)オブジェクトを関連づけます。ReplicaSetには5つの[Pod](/ja/docs/concepts/workloads/pods/pod/)があり、それぞれHello Worldアプリケーションが起動しています。 1. Deploymentに関する情報を表示します: diff --git a/content/ja/examples/application/deployment-scale.yaml b/content/ja/examples/application/deployment-scale.yaml index 3bdc7b6f5b8e3..68801c971deb8 100644 --- a/content/ja/examples/application/deployment-scale.yaml +++ b/content/ja/examples/application/deployment-scale.yaml @@ -14,6 +14,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.8 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ja/examples/application/deployment-update.yaml b/content/ja/examples/application/deployment-update.yaml index 8c683d6dc776e..18e8be65fbd71 100644 --- a/content/ja/examples/application/deployment-update.yaml +++ b/content/ja/examples/application/deployment-update.yaml @@ -14,6 +14,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.8 # Update the version of nginx from 1.7.9 to 1.8 + image: nginx:1.16.1 # Update the version of nginx from 1.14.2 to 1.16.1 ports: - containerPort: 80 diff --git a/content/ja/examples/application/deployment.yaml b/content/ja/examples/application/deployment.yaml index 0f526b16c0ad2..2cd599218d01e 100644 --- a/content/ja/examples/application/deployment.yaml +++ b/content/ja/examples/application/deployment.yaml @@ -14,6 +14,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ja/examples/application/nginx-app.yaml b/content/ja/examples/application/nginx-app.yaml index c3f926b74e752..d00682e1fcbba 100644 --- a/content/ja/examples/application/nginx-app.yaml +++ b/content/ja/examples/application/nginx-app.yaml @@ -29,6 +29,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ja/examples/application/nginx/nginx-deployment.yaml b/content/ja/examples/application/nginx/nginx-deployment.yaml index f05bfa3c5f557..7f608bc47fa5a 100644 --- a/content/ja/examples/application/nginx/nginx-deployment.yaml +++ b/content/ja/examples/application/nginx/nginx-deployment.yaml @@ -14,6 +14,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff 
--git a/content/ja/examples/application/simple_deployment.yaml b/content/ja/examples/application/simple_deployment.yaml index 10fa1ddf29999..d9c74af8c577b 100644 --- a/content/ja/examples/application/simple_deployment.yaml +++ b/content/ja/examples/application/simple_deployment.yaml @@ -14,6 +14,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ja/examples/application/update_deployment.yaml b/content/ja/examples/application/update_deployment.yaml index d53aa3e6d2fc8..2d7603acb956c 100644 --- a/content/ja/examples/application/update_deployment.yaml +++ b/content/ja/examples/application/update_deployment.yaml @@ -13,6 +13,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.11.9 # update the image + image: nginx:1.16.1 # update the image ports: - containerPort: 80 diff --git a/content/ja/examples/controllers/nginx-deployment.yaml b/content/ja/examples/controllers/nginx-deployment.yaml index 5dd80da371f54..685c17aa68e1d 100644 --- a/content/ja/examples/controllers/nginx-deployment.yaml +++ b/content/ja/examples/controllers/nginx-deployment.yaml @@ -16,6 +16,6 @@ spec: spec: containers: - name: nginx - image: nginx:1.15.4 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ja/examples/pods/simple-pod.yaml b/content/ja/examples/pods/simple-pod.yaml index 4208f4b36536d..0e79d8a3c6128 100644 --- a/content/ja/examples/pods/simple-pod.yaml +++ b/content/ja/examples/pods/simple-pod.yaml @@ -5,6 +5,6 @@ metadata: spec: containers: - name: nginx - image: nginx:1.7.9 + image: nginx:1.14.2 ports: - containerPort: 80 diff --git a/content/ko/_index.html b/content/ko/_index.html index b93ac38e8f13b..7b7dd9e059cea 100644 --- a/content/ko/_index.html +++ b/content/ko/_index.html @@ -45,12 +45,12 @@

150+ 마이크로서비스를 쿠버네티스로 마이그레이션하는


- Attend KubeCon in Amsterdam on Mar. 30-Apr. 2, 2020
+ Attend KubeCon in Amsterdam in July/August TBD



- Attend KubeCon in Shanghai on July 28-30, 2020
+ Attend KubeCon in Boston on November 17-20, 2020

diff --git a/content/ko/docs/concepts/_index.md b/content/ko/docs/concepts/_index.md index c6e7556312130..03a1d64ddd0f2 100644 --- a/content/ko/docs/concepts/_index.md +++ b/content/ko/docs/concepts/_index.md @@ -41,7 +41,7 @@ weight: 40 * [데몬 셋](/ko/docs/concepts/workloads/controllers/daemonset/) * [스테이트풀 셋](/ko/docs/concepts/workloads/controllers/statefulset/) * [레플리카 셋](/ko/docs/concepts/workloads/controllers/replicaset/) -* [잡](/docs/concepts/workloads/controllers/jobs-run-to-completion/) +* [잡](/ko/docs/concepts/workloads/controllers/jobs-run-to-completion/) ## 쿠버네티스 컨트롤 플레인 diff --git a/content/ko/docs/concepts/architecture/master-node-communication.md b/content/ko/docs/concepts/architecture/master-node-communication.md index 597f375d8c810..7e70cffde284e 100644 --- a/content/ko/docs/concepts/architecture/master-node-communication.md +++ b/content/ko/docs/concepts/architecture/master-node-communication.md @@ -74,7 +74,7 @@ apiserver는 kubelet의 제공 인증서를 확인하지 않는데, 플래그를 이용한다 그것이 불가능한 경우, 신뢰할 수 없는 또는 공인 네트워크에 대한 연결을 피하고 싶다면, -apiserver와 kubelet 사이에 [SSH 터널링](/docs/concepts/architecture/master-node-communication/#ssh-터널)을 +apiserver와 kubelet 사이에 [SSH 터널링](/ko/docs/concepts/architecture/master-node-communication/#ssh-터널)을 사용한다. 마지막으로, kubelet API를 안전하게 하기 위해 diff --git a/content/ko/docs/concepts/architecture/nodes.md b/content/ko/docs/concepts/architecture/nodes.md index a94a81a0cbdc9..423004b27efee 100644 --- a/content/ko/docs/concepts/architecture/nodes.md +++ b/content/ko/docs/concepts/architecture/nodes.md @@ -128,6 +128,7 @@ ready 컨디션의 상태가 [kube-controller-manager](/docs/admin/kube-controll `metadata.name` 필드를 근거로 상태 체크를 수행하여 노드의 유효성을 확인한다. 노드가 유효하면, 즉 모든 필요한 서비스가 동작 중이면, 파드를 동작시킬 자격이 된다. 그렇지 않으면, 유효하게 될때까지 어떠한 클러스터 활동에 대해서도 무시된다. +노드 오브젝트의 이름은 유효한 [DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. {{< note >}} 쿠버네티스는 유효하지 않은 노드로부터 오브젝트를 보호하고 유효한 상태로 이르는지 확인하기 위해 지속적으로 체크한다. diff --git a/content/ko/docs/concepts/cluster-administration/cluster-administration-overview.md b/content/ko/docs/concepts/cluster-administration/cluster-administration-overview.md index 03c7bc919b177..2ae19a29614b1 100644 --- a/content/ko/docs/concepts/cluster-administration/cluster-administration-overview.md +++ b/content/ko/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -31,7 +31,7 @@ weight: 10 ## 클러스터 관리 -* [클러스터 관리](/docs/tasks/administer-cluster/cluster-management/)는 클러스터의 라이프사이클과 관련된 몇 가지 주제를 설명한다. 이는 새 클러스터 생성, 마스터와 워커노드 업그레이드, 노드 유지보수 실행 (예: 커널 업그레이드), 그리고 동작 중인 클러스터의 쿠버네티스 API 버전 업그레이드 등을 포함한다. +* [클러스터 관리](/ko/docs/tasks/administer-cluster/cluster-management/)는 클러스터의 라이프사이클과 관련된 몇 가지 주제를 설명한다. 이는 새 클러스터 생성, 마스터와 워커노드 업그레이드, 노드 유지보수 실행 (예: 커널 업그레이드), 그리고 동작 중인 클러스터의 쿠버네티스 API 버전 업그레이드 등을 포함한다. * 어떻게 [노드 관리](/ko/docs/concepts/architecture/nodes/)를 하는지 배워보자. diff --git a/content/ko/docs/concepts/cluster-administration/federation.md b/content/ko/docs/concepts/cluster-administration/federation.md deleted file mode 100644 index 7d7fb35a0ae18..0000000000000 --- a/content/ko/docs/concepts/cluster-administration/federation.md +++ /dev/null @@ -1,186 +0,0 @@ ---- -title: 페더레이션 -content_template: templates/concept -weight: 80 ---- - -{{% capture overview %}} - -{{< deprecationfilewarning >}} -{{< include "federation-deprecation-warning-note.md" >}} -{{< /deprecationfilewarning >}} - -이 페이지는 여러 쿠버네티스 클러스터를 페더레이션을 통해서 관리해야 하는 이유와 방법을 -설명한다. -{{% /capture %}} - -{{% capture body %}} -## 페더레이션 이유 - -페더레이션을 사용하면 여러 클러스터를 쉽게 관리할 수 있다. 
이는 2가지 주요 빌딩 블록을 -제공함으로써 이루어진다. - - * 클러스터 간의 리소스 동기화: 페더레이션은 여러 클러스터 내의 리소스를 - 동기화하는 능력을 제공한다. 예를 들면, 여러 클러스터에 동일한 디플로이먼트가 존재하는 것을 확인 할 수 있다. - * 클러스터 간의 디스커버리: 페더레이션은 모든 클러스터의 백엔드에 DNS 서버 및 로드벨런서를 자동 구성하는 능력을 제공한다. 예를 들면, 글로벌 VIP 또는 DNS 기록이 여러 클러스터의 백엔드 엑세스에 사용될 수 있는 것을 확인할 수 있다. - -페더레이션이 가능하게 하는 다른 사례는 다음과 같다. - -* 고가용성: 클러스터에 걸쳐서 부하를 분산하고 DNS - 서버와 로드벨런서를 자동 구성함으로써, 페더레이션은 클러스터 장애의 영향을 - 최소화한다. -* 공급자 락인(lock-in) 회피: 애플리케이션의 클러스터 간 마이그레이션을 쉽게 - 만듦으로써, 페더레이션은 클러스터 공급자의 락인을 방지한다. - - -여러 클러스터를 운영하는 경우가 아니면 페더레이션은 필요 없다. 여러 클러스터가 필요한 -이유의 일부는 다음과 같다. - -* 짧은 지연시간: 클러스터가 여러 지역(region)에 있으면 사용자에게 가장 가까운 클러스터로부터 - 서비스함으로써 지연시간을 최소화한다. -* 결함 격리: 하나의 큰 클러스터보다 여러 개의 작은 클러스터를 사용하는 것이 - 결함을 격리하는데 더 효과적이다(예를 들면, 클라우드 - 공급자의 다른 가용 영역(availability zone)에 있는 여러 클러스터). -* 확장성: 단일 쿠버네티스 클러스터는 확장성에 한계가 있다(일반적인 - 사용자에게 해당되는 사항은 아니다. 더 자세한 내용: - [쿠버네티스 스케일링 및 성능 목표](https://git.k8s.io/community/sig-scalability/goals.md)). -* [하이브리드 클라우드](#하이브리드-클라우드-역량): 다른 클라우드 공급자나 온-프레미스 데이터 센터에 있는 여러 클러스터를 - 운영할 수 있다. - -### 주의 사항 - -페더레이션에는 매력적인 사례가 많지만, 다소 주의 해야 할 -사항도 있다. - -* 네트워크 대역폭과 비용 증가: 페더레이션 컨트롤 플레인은 모든 클러스터를 - 감시하여 현재 상태가 예정된 상태와 같은지 확인한다. 이것은 클러스터들이 - 한 클라우드 제공자의 여러 다른 지역에서 또는 클라우드 제공자 간에 걸쳐 동작하는 - 경우 상당한 네트워크 비용을 초래할 수 있다. -* 클러스터 간 격리 수준 감소: 페더레이션 컨트롤 플레인에서의 오류는 모든 클러스터에 - 영향을 줄 수 있다. 이것은 페더레이션 컨트롤 플레인의 논리를 최소한으로 - 유지함으로써 완화된다. 페더레이션은 가능한 경우 언제라도 - 쿠버네티스 클러스터에 컨트롤 플레인을 위임한다. 페더레이션은 안전성을 제공하고 - 여러 클러스터의 중단을 방지할 수 있도록 민감하게 설계 및 구현되었다. -* 성숙도: 페더레이션 프로젝트는 상대적으로 신규 프로젝트이고 성숙도가 높지 않다. - 모든 리소스가 이용 가능한 상태는 아니며 많은 리소스가 아직 알파 상태이다. [이슈 - 88](https://github.com/kubernetes/federation/issues/88)은 팀이 해결 - 중에 있는 시스템의 알려진 이슈를 열거하고 있다. - -### 하이브리드 클라우드 역량 - -쿠버네티스 클러스터의 페더레이션은 다른 클라우드 제공자(예를 들어, Google 클라우드, AWS), -그리고 온-프레미스(예를 들어, OpenStack)에서 동작 중인 클러스터를 포함할 수 -있다. [Kubefed](/docs/tasks/federation/set-up-cluster-federation-kubefed/)는 연합된 클러스터 배치에 권장되는 방법이다. - -그 후에, [API 리소스](#api-리소스)는 서로 다른 클러스터와 클라우드 -제공자에 걸쳐 확장될 수 있다. - -## 페더레이션 설치 - -여러 클러스터의 페더레이션 구성을 위해서는, 페더레이션 컨트롤 플레인을 우선적으로 -설치해야 한다. -페더레이션 컨트롤 플레인의 설치를 위해서는 [설치 가이드](/docs/tutorials/federation/set-up-cluster-federation-kubefed/) -를 따른다. - -## API 리소스 - -컨트롤 플레인이 설치되고 나면, 페더레이션 API 리소스 생성을 시작할 수 -있다. -다음의 가이드는 일부 리소스에 대해서 자세히 설명한다. - -* [클러스터](/docs/tasks/federation/administer-federation/cluster/) -* [컨피그 맵](/docs/tasks/federation/administer-federation/configmap/) -* [데몬 셋](/docs/tasks/federation/administer-federation/daemonset/) -* [디플로이먼트](/docs/tasks/federation/administer-federation/deployment/) -* [이벤트](/docs/tasks/federation/administer-federation/events/) -* [Hpa](/docs/tasks/federation/administer-federation/hpa/) -* [인그레스](/docs/tasks/federation/administer-federation/ingress/) -* [잡](/docs/tasks/federation/administer-federation/job/) -* [네임스페이스](/docs/tasks/federation/administer-federation/namespaces/) -* [레플리카 셋](/docs/tasks/federation/administer-federation/replicaset/) -* [시크릿](/docs/tasks/federation/administer-federation/secret/) -* [서비스](/docs/concepts/cluster-administration/federation-service-discovery/) - - -[API 참조 문서](/docs/reference/federation/)는 페더레이션 -apiserver가 지원하는 모든 리소스를 열거한다. - -## 삭제 캐스케이딩(cascading) - -쿠버네티스 버전 1.6은 연합된 리소스에 대한 삭제 캐스케이딩을 -지원한다. 삭제 케스케이딩이 적용된 경우, 페더레이션 컨트롤 플레인에서 -리소스를 삭제하면, 모든 클러스터에서 상응하는 리소스가 삭제된다. - -REST API 사용하는 경우 삭제 캐스케이딩이 기본으로 활성화되지 않는다. 그것을 -활성화하려면, REST API를 사용하여 페더레이션 컨트롤 플레인에서 리소스를 삭제할 때 -`DeleteOptions.orphanDependents=false` 옵션을 설정한다. `kubectl -delete`를 사용하면 -삭제 캐스케이딩이 기본으로 활성화된다. `kubectl -delete --cascade=false`를 실행하여 비활성화할 수 있다. - -참고: 쿠버네티스 버전 1.5는 페더레이션 리소스의 부분 집합에 대한 삭제 -캐스케이딩을 지원하였다. 
- -## 단일 클러스터의 범위 - -Google Compute Engine 또는 Amazon Web Services와 같은 IaaS 제공자에서는, VM이 -[영역(zone)](https://cloud.google.com/compute/docs/zones) 또는 [가용 영역(availability -zone)](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html)에 존재한다. -다음과 같은 이유로, 쿠버네티스 클러스터의 모든 VM을 동일한 가용 영역에 두는 것을 추천한다. - - - 단일 글로벌 쿠버네티스 클러스터에 비해서, 장애에 대한 단일-포인트가 더 적다. - - 여러 가용 영역에 걸친 클러스터에 비해서, 단일-영역 클러스터의 가용성 속성에 대한 추론이 - 더 쉽다. - - 쿠버네티스 개발자가 시스템을 디자인할 때(예를 들어, 지연 시간, 대역폭, 연관된 장애를 - 고려할 때) 모든 기계가 단일 데이터 센터에 있거나 밀접하게 연결되어 있다고 가정하고 있다. - -가용 영역당 더 많은 VM이 포함되는 적은 수의 클러스터 실행을 추천한다. 다만, 여러 가용 영역 마다 여러 클러스터의 실행도 가능하다. - -가용 영역당 더 적은 수의 클러스터가 선호되는 이유는 다음과 같다. - - - 한 클러스터에 많은 노드가 있는 일부의 경우 파드의 빈 패킹(bin packing)이 향상됨(리소스 단편화 감소). - - 운영 오버헤드 감소(운영 툴과 프로세스의 성숙도에 의해 해당 장점은 반감되는 측면이 있음). - - apiserver VMs와 같이, 클러스터당 비용이 고정된 리소스의 비용을 감소(그러나 중간 규모 부터 큰 규모에 이르는 클러스터의 - 전체 클러스터 비용에 비하면 상대적으로 적은 비용). - -여러 클러스터가 필요한 이유는 다음을 포함한다. - - - 다른 업무의 계층으로부터 특정 계층의 격리가 요구되는 엄격한 보안 정책(다만, 아래의 클러스터 분할하기 정보를 확인하기 - 바람). - - 새로운 쿠버네티스 릴리스 또는 다른 클러스터 소프트웨어를 카나리아(canary) 방식으로 릴리스하기 위해서 클러스터를 테스트. - -## 적절한 클러스터 수 선택하기 - -쿠버네티스 클러스터의 수를 선택하는 것은 상대적으로 고정적인 선택이며, 가끔식만 재고된다. -대조적으로, 클러스터의 노드 수와 서비스 내의 파드 수는 부하와 규모 증가에 따라 -빈번하게 변경될 수 있다. - -클러스터의 수를 선택하기 위해서, 첫 번째로, 쿠버네티스에서 동작할 서비스의 모든 최종 사용자에게 적절한 지연 시간을 제공할 수 있는 지역들을 선택할 -필요가 있다(만약 콘텐츠 전송 네트워크를 사용한다면, CDN-호스트된 콘텐츠의 지연 시간 요구사항 -고려할 필요가 없음). 법적인 이슈 또한 이것에 영향을 줄 수 있다. 예를 들면, 어떤 글로벌 고객 기반의 회사는 US, AP, SA 지역 등 특정 지역에서 클러스터를 운영하도록 결정할 수도 있다. -지역의 수를 `R`이라 부르자. - -두 번째로, 여전히 사용 가능한 상태에서, 얼마나 많은 클러스터가 동시에 사용할 수 없는 상태가 될 수 있는지 결정한다. -사용하지 않는 상태가 될 수 있는 수를 `U`라고 하자. 만약이 값에 확신이 없다면, 1이 괜찮은 선택이다. - -클러스터 장애 상황에서 어느 지역으로든지 직접적인 트래픽에 대한 로드밸런싱이 허용된다면, `R` -또는 적어도 `U + 1` 이상의 클러스터가 있으면 된다. 만약 그렇지 않다면(예를 들어, 클러스터 장애 상황에서 모든 -사용자에 대한 낮은 지연 시간을 유지하고 싶다면), `R * (U + 1)`(각 `R` 지역 내에 `U + 1`) -클러스터가 필요하다. 어느 경우든지, 각 클러스터는 다른 영역에 배치하도록 노력하는 것이 좋다. - -마지막으로, 클러스터 중 어느 클러스터라도 쿠버네티스 클러스터에서 추천되는 최대 노드 수 보다 더 많은 노드가 필요하다면, -더 많은 클러스터가 필요할 것이다. 쿠버네티스 v1.3은 클러스터를 최대 1000노드까지 지원한다. 쿠버네티스 v1.8은 -클러스터를 최대 5000 노드까지 지원한다. 더 자세한 가이드는 [대규모 클러스터 구축하기](/docs/setup/best-practices/cluster-large/)에서 확인 가능하다. - -{{% /capture %}} - -{{% capture whatsnext %}} -* [페더레이션 - 제안](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/design-proposals/multicluster/federation.md)에 대해 더 학습하기. -* 클러스터 페더레이션 [설치 가이드](/docs/tutorials/federation/set-up-cluster-federation-kubefed/) 보기. -* [Kubecon2016 페더레이션 발표](https://www.youtube.com/watch?v=pq9lbkmxpS8) 보기 -* [Kubecon2017 유럽 페더레이션 업데이트 내용](https://www.youtube.com/watch?v=kwOvOLnFYck) 보기 -* [Kubecon2018 유럽 sig-multicluster 업데이트 내용](https://www.youtube.com/watch?v=vGZo5DaThQU) 보기 -* [Kubecon2018 유럽 Federation-v2 프로토타입 발표](https://youtu.be/q27rbaX5Jis?t=7m20s) 보기 -* [Federation-v2 사용자 가이드](https://github.com/kubernetes-sigs/federation-v2/blob/master/docs/userguide.md) 보기 -{{% /capture %}} diff --git a/content/ko/docs/concepts/cluster-administration/proxies.md b/content/ko/docs/concepts/cluster-administration/proxies.md index 66e29d2e40ac4..3b8b2d32a1bdb 100644 --- a/content/ko/docs/concepts/cluster-administration/proxies.md +++ b/content/ko/docs/concepts/cluster-administration/proxies.md @@ -14,7 +14,7 @@ weight: 90 쿠버네티스를 이용할 때에 사용할 수 있는 여러 프락시가 있다. -1. [kubectl proxy](/docs/tasks/access-application-cluster/access-cluster/#directly-accessing-the-rest-api): +1. [kubectl proxy](/ko/docs/tasks/access-application-cluster/access-cluster/#rest-api에-직접-액세스): - 사용자의 데스크탑이나 파드 안에서 실행한다. - 로컬 호스트 주소에서 쿠버네티스의 API 서버로 프락시한다. 
@@ -23,7 +23,7 @@ weight: 90 - API 서버를 찾는다. - 인증 헤더를 추가한다. -1. [apiserver proxy](/docs/tasks/access-application-cluster/access-cluster/#discovering-builtin-services): +1. [apiserver proxy](/ko/docs/tasks/access-application-cluster/access-cluster/#빌트인-서비스들의-발견): - API 서버에 내장된 요새(bastion)이다. - 클러스터 외부의 사용자가 도달할 수 없는 클러스터 IP 주소로 연결한다. diff --git a/content/ko/docs/concepts/configuration/assign-pod-node.md b/content/ko/docs/concepts/configuration/assign-pod-node.md index 1c818457cb86d..027a7d38d378e 100644 --- a/content/ko/docs/concepts/configuration/assign-pod-node.md +++ b/content/ko/docs/concepts/configuration/assign-pod-node.md @@ -107,7 +107,8 @@ spec: `nodeSelector` 는 파드를 특정 레이블이 있는 노드로 제한하는 매우 간단한 방법을 제공한다. 어피니티/안티-어피니티 기능은 표현할 수 있는 제약 종류를 크게 확장한다. 주요 개선 사항은 다음과 같다. -1. 언어가 보다 표현적이다("AND 또는 정확한 일치" 만이 아니다). +1. 어피니티/안티-어피니티 언어가 더 표현적이다. 언어는 논리 연산자인 AND 연산으로 작성된 + 정확한 매칭 항목 이외에 더 많은 매칭 규칙을 제공한다. 2. 규칙이 엄격한 요구 사항이 아니라 "유연한(soft)"/"선호(preference)" 규칙을 나타낼 수 있기에 스케줄러가 규칙을 만족할 수 없더라도, 파드가 계속 스케줄 되도록 한다. 3. 노드 자체에 레이블을 붙이기보다는 노드(또는 다른 토폴로지 도메인)에서 실행 중인 다른 파드의 레이블을 제한할 수 있다. @@ -155,9 +156,9 @@ spec: `nodeSelector` 와 `nodeAffinity` 를 모두 지정한다면 파드가 후보 노드에 스케줄 되기 위해서는 *둘 다* 반드시 만족해야 한다. -`nodeAffinity` 유형과 연관된 `nodeSelectorTerms` 를 지정하면, 파드를 `nodeSelectorTerms` 가 지정된 것 중 **한 가지**라도 만족하는 노드에 스케줄할 수 있다. +`nodeAffinity` 유형과 연관된 `nodeSelectorTerms` 를 지정하면, 파드는 `nodeSelectorTerms` 를 **모두** 만족하는 노드에만 스케줄할 수 있다. -`nodeSelectorTerms` 와 연관된 여러 `matchExpressions` 를 지정하면, 파드는 `matchExpressions` 를 **모두** 만족하는 노드에만 스케줄할 수 있다. +`nodeSelectorTerms` 와 연관된 여러 `matchExpressions` 를 지정하면, 파드는 `matchExpressions` 이 지정된 것 중 **한 가지**라도 만족하는 노드에만 스케줄할 수 있다. 파드가 스케줄 된 노드의 레이블을 지우거나 변경해도 파드는 제거되지 않는다. 다시 말해서 어피니티 선택은 파드를 스케줄링 하는 시점에만 작동한다. @@ -224,7 +225,7 @@ spec: 1. 어피니티와 `requiredDuringSchedulingIgnoredDuringExecution` 파드 안티-어피니티는 대해 `topologyKey` 가 비어있는 것을 허용하지 않는다. 2. `requiredDuringSchedulingIgnoredDuringExecution` 파드 안티-어피니티에서 `topologyKey` 를 `kubernetes.io/hostname` 로 제한하기 위해 어드미션 컨트롤러 `LimitPodHardAntiAffinityTopology` 가 도입되었다. 사용자 지정 토폴로지를에 사용할 수 있도록 하려면, 어드미션 컨트롤러를 수정하거나 간단히 이를 비활성화 할 수 있다. -3. `preferredDuringSchedulingIgnoredDuringExecution` 파드 안티-어피니티의 경우 빈 `topologyKey` 는 "all topology"("all topology"는 현재 `kubernetes.io/hostname`, `failure-domain.beta.kubernetes.io/zone` 그리고 `failure-domain.beta.kubernetes.io/region` 의 조합으로 제한된다)로 해석한다. +3. `preferredDuringSchedulingIgnoredDuringExecution` 파드 안티-어피니티는 `topologyKey` 가 비어있는 것을 허용하지 않는다. 4. 위의 경우를 제외하고, `topologyKey` 는 적법한 어느 레이블-키도 가능하다. `labelSelector` 와 `topologyKey` 외에도 `labelSelector` 와 일치해야 하는 네임스페이스 목록 `namespaces` 를 @@ -345,7 +346,7 @@ web-server-1287567482-s330j 1/1 Running 0 7m 10.192.3 위의 예시에서 `topologyKey:"kubernetes.io/hostname"` 과 함께 `PodAntiAffinity` 규칙을 사용해서 두 개의 인스터스가 동일한 호스트에 있지 않도록 redis 클러스터를 배포한다. 같은 기술을 사용해서 고 가용성을 위해 안티-어피니티로 구성된 스테이트풀셋의 예시는 -[ZooKeeper 튜토리얼](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)을 본다. +[ZooKeeper 튜토리얼](/ko/docs/tutorials/stateful-application/zookeeper/#노드-실패-방지)을 본다. ## nodeName diff --git a/content/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig.md b/content/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig.md index d5ffe7a508163..d24d749d267f4 100644 --- a/content/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig.md +++ b/content/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig.md @@ -23,7 +23,7 @@ kubeconfig 파일들을 사용하여 클러스터, 사용자, 네임스페이스 다른 kubeconfig 파일을 사용할 수 있다. 
kubeconfig 파일을 생성하고 지정하는 단계별 지시사항은 -[다중 클러스터로 접근 구성하기](/docs/tasks/access-application-cluster/configure-access-multiple-clusters)를 참조한다. +[다중 클러스터로 접근 구성하기](/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)를 참조한다. {{% /capture %}} @@ -97,7 +97,7 @@ kubectl config view 두 번째 파일의 `red-user` 하위에 충돌하지 않는 항목이 있어도 버린다. `KUBECONFIG` 환경 변수 설정의 예로, - [KUBECONFIG 환경 변수 설정](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable)를 참조한다. + [KUBECONFIG 환경 변수 설정](/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#kubeconfig-환경-변수-설정)를 참조한다. 그렇지 않다면, 병합하지 않고 기본 kubecofig 파일인 `$HOME/.kube/config`를 사용한다. @@ -148,7 +148,7 @@ kubeconfig 파일에서 파일과 경로 참조는 kubeconfig 파일의 위치 {{% capture whatsnext %}} -* [다중 클러스터 접근 구성하기](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) +* [다중 클러스터 접근 구성하기](/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) * [`kubectl config`](/docs/reference/generated/kubectl/kubectl-commands#config) {{% /capture %}} diff --git a/content/ko/docs/concepts/configuration/overview.md b/content/ko/docs/concepts/configuration/overview.md index 1bcb7362d6593..7611be8cb67d2 100644 --- a/content/ko/docs/concepts/configuration/overview.md +++ b/content/ko/docs/concepts/configuration/overview.md @@ -32,7 +32,7 @@ weight: 10 - 가능하다면 단독 파드(즉, [레플리카 셋](/ko/docs/concepts/workloads/controllers/replicaset/)이나 [디플로이먼트](/ko/docs/concepts/workloads/controllers/deployment/)에 연결되지 않은 파드)를 사용하지 않는다. 단독 파드는 노드 장애 이벤트가 발생해도 다시 스케줄링되지 않는다. - 명백하게 [`restartPolicy: Never`](/ko/docs/concepts/workloads/pods/pod-lifecycle/#재시작-정책)를 사용하는 상황을 제외한다면, 의도한 파드의 수가 항상 사용 가능한 상태를 유지하는 레플리카 셋을 생성하고, 파드를 교체하는 전략([롤링 업데이트](/ko/docs/concepts/workloads/controllers/deployment/#디플로이먼트-롤링-업데이트)와 같은)을 명시하는 디플로이먼트는 파드를 직접 생성하기 위해 항상 선호되는 방법이다. [잡](/docs/concepts/workloads/controllers/jobs-run-to-completion/) 또한 적절할 수 있다. + 명백하게 [`restartPolicy: Never`](/ko/docs/concepts/workloads/pods/pod-lifecycle/#재시작-정책)를 사용하는 상황을 제외한다면, 의도한 파드의 수가 항상 사용 가능한 상태를 유지하는 레플리카 셋을 생성하고, 파드를 교체하는 전략([롤링 업데이트](/ko/docs/concepts/workloads/controllers/deployment/#디플로이먼트-롤링-업데이트)와 같은)을 명시하는 디플로이먼트는 파드를 직접 생성하기 위해 항상 선호되는 방법이다. [잡](/ko/docs/concepts/workloads/controllers/jobs-run-to-completion/) 또한 적절할 수 있다. ## 서비스 diff --git a/content/ko/docs/concepts/containers/images.md b/content/ko/docs/concepts/containers/images.md index 615e84dc40c38..4a84219ff6472 100644 --- a/content/ko/docs/concepts/containers/images.md +++ b/content/ko/docs/concepts/containers/images.md @@ -6,7 +6,7 @@ weight: 10 {{% capture overview %}} -사용자 Docker 이미지를 생성하고 레지스트리에 푸시(push)하여 쿠버네티스 파드에서 참조되기 이전에 대비한다. +사용자 Docker 이미지를 생성하고 레지스트리에 푸시(push)하여 쿠버네티스 파드에서 참조되기 이전에 대비한다. 컨테이너의 `image` 속성은 `docker` 커맨드에서 지원하는 문법과 같은 문법을 지원한다. 이는 프라이빗 레지스트리와 태그를 포함한다. @@ -17,8 +17,8 @@ weight: 10 ## 이미지 업데이트 -기본 풀(pull) 정책은 `IfNotPresent`이며, 이것은 Kubelet이 이미 -존재하는 이미지에 대한 풀을 생략하게 한다. 만약 항상 풀을 강제하고 싶다면, +기본 풀(pull) 정책은 `IfNotPresent`이며, 이것은 Kubelet이 이미 +존재하는 이미지에 대한 풀을 생략하게 한다. 만약 항상 풀을 강제하고 싶다면, 다음 중 하나를 수행하면 된다. - 컨테이너의 `imagePullPolicy`를 `Always`로 설정. @@ -35,7 +35,7 @@ Docker CLI는 현재 `docker manifest` 커맨드와 `create`, `annotate`, `push` 다음에서 docker 문서를 확인하기 바란다. https://docs.docker.com/edge/engine/reference/commandline/manifest/ -이것을 사용하는 방법에 대한 예제는 빌드 하니스(harness)에서 참조한다. +이것을 사용하는 방법에 대한 예제는 빌드 하니스(harness)에서 참조한다. https://cs.k8s.io/?q=docker%20manifest%20(create%7Cpush%7Cannotate)&i=nope&files=&repos= 이 커맨드는 Docker CLI에 의존하며 그에 전적으로 구현된다. 
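다음은 위에서 언급한 `docker manifest` 하위 커맨드들을 조합해서 사용하는 방법을 가정해 본 간단한 예시이다. 레지스트리와 이미지 이름은 설명을 위해 임의로 정한 값이다.

```shell
# Docker CLI의 실험적 기능을 활성화한다
export DOCKER_CLI_EXPERIMENTAL=enabled

# 아키텍처별 이미지들을 하나의 매니페스트 리스트로 묶는다(이미지 이름은 가정한 값임)
docker manifest create myregistry.io/myimage:v1 \
    myregistry.io/myimage:v1-amd64 \
    myregistry.io/myimage:v1-arm64

# 특정 이미지에 아키텍처 정보를 어노테이트한다
docker manifest annotate myregistry.io/myimage:v1 myregistry.io/myimage:v1-arm64 --arch arm64

# 매니페스트 리스트를 레지스트리에 푸시한다
docker manifest push myregistry.io/myimage:v1
```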
`$HOME/.docker/config.json` 편집 및 `experimental` 키를 `enabled`로 설정하거나, CLI 커맨드 호출 시 간단히 `DOCKER_CLI_EXPERIMENTAL` 환경 변수를 `enabled`로만 설정해도 된다. @@ -79,9 +79,9 @@ Docker *18.06 또는 그 이상* 을 사용하길 바란다. 더 낮은 버전 ### Google 컨테이너 레지스트리 사용 -쿠버네티스는 Google 컴퓨트 엔진(GCE)에서 동작할 때, [Google 컨테이너 -레지스트리(GCR)](https://cloud.google.com/tools/container-registry/)를 자연스럽게 -지원한다. 사용자의 클러스터가 GCE 또는 Google 쿠버네티스 엔진에서 동작 중이라면, 간단히 +쿠버네티스는 Google 컴퓨트 엔진(GCE)에서 동작할 때, [Google 컨테이너 +레지스트리(GCR)](https://cloud.google.com/tools/container-registry/)를 자연스럽게 +지원한다. 사용자의 클러스터가 GCE 또는 Google 쿠버네티스 엔진에서 동작 중이라면, 간단히 이미지의 전체 이름(예: gcr.io/my_project/image:tag)을 사용하면 된다. 클러스터 내에서 모든 파드는 해당 레지스트리에 있는 이미지에 읽기 접근 권한을 가질 것이다. @@ -95,10 +95,10 @@ GCR을 인증할 것이다. 인스턴스의 서비스 계정은 쿠버네티스는 노드가 AWS EC2 인스턴스일 때, [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/)를 자연스럽게 지원한다. -간단히 이미지의 전체 이름(예: `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`)을 +간단히 이미지의 전체 이름(예: `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`)을 파드 정의에 사용하면 된다. -파드를 생성할 수 있는 클러스터의 모든 사용자는 ECR 레지스트리에 있는 어떠한 +파드를 생성할 수 있는 클러스터의 모든 사용자는 ECR 레지스트리에 있는 어떠한 이미지든지 파드를 실행하는데 사용할 수 있다. kubelet은 ECR 자격 증명을 가져오고 주기적으로 갱신할 것이다. 이것을 위해서는 다음에 대한 권한이 필요하다. @@ -127,12 +127,12 @@ kubelet은 ECR 자격 증명을 가져오고 주기적으로 갱신할 것이다 - `aws_credentials.go:116] Got ECR credentials from ECR API for .dkr.ecr..amazonaws.com` ### Azure 컨테이너 레지스트리(ACR) 사용 -[Azure 컨테이너 레지스트리](https://azure.microsoft.com/en-us/services/container-registry/)를 사용하는 경우 +[Azure 컨테이너 레지스트리](https://azure.microsoft.com/en-us/services/container-registry/)를 사용하는 경우 관리자 역할의 사용자나 서비스 주체(principal) 중 하나를 사용하여 인증할 수 있다. -어느 경우라도, 인증은 표준 Docker 인증을 통해서 수행된다. 이러한 지침은 +어느 경우라도, 인증은 표준 Docker 인증을 통해서 수행된다. 이러한 지침은 [azure-cli](https://github.com/azure/azure-cli) 명령줄 도구 사용을 가정한다. -우선 레지스트리를 생성하고 자격 증명을 만들어야한다. 이에 대한 전체 문서는 +우선 레지스트리를 생성하고 자격 증명을 만들어야한다. 이에 대한 전체 문서는 [Azure 컨테이너 레지스트리 문서](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli)에서 찾을 수 있다. 컨테이너 레지스트리를 생성하고 나면, 다음의 자격 증명을 사용하여 로그인한다. @@ -142,7 +142,7 @@ kubelet은 ECR 자격 증명을 가져오고 주기적으로 갱신할 것이다 * `DOCKER_REGISTRY_SERVER`: `${some-registry-name}.azurecr.io` * `DOCKER_EMAIL`: `${some-email-address}` -해당 변수에 대한 값을 채우고 나면 +해당 변수에 대한 값을 채우고 나면 [쿠버네티스 시크릿을 구성하고 그것을 파드 디플로이를 위해서 사용](/ko/docs/concepts/containers/images/#파드에-imagepullsecrets-명시)할 수 있다. ### IBM 클라우드 컨테이너 레지스트리 사용 @@ -159,13 +159,13 @@ Google 쿠버네티스 엔진에서 동작 중이라면, 이미 각 노드에 Go {{< /note >}} {{< note >}} -AWS EC2에서 동작 중이고 EC2 컨테이너 레지스트리(ECR)을 사용 중이라면, 각 노드의 kubelet은 +AWS EC2에서 동작 중이고 EC2 컨테이너 레지스트리(ECR)을 사용 중이라면, 각 노드의 kubelet은 ECR 로그인 자격 증명을 관리하고 업데이트할 것이다. 그렇다면 이 방법은 쓸 수 없다. {{< /note >}} {{< note >}} -이 방법은 노드의 구성을 제어할 수 있는 경우에만 적합하다. 이 방법은 -GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에 대해서는 신뢰성 있게 작동하지 +이 방법은 노드의 구성을 제어할 수 있는 경우에만 적합하다. 이 방법은 +GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에 대해서는 신뢰성 있게 작동하지 않을 것이다. {{< /note >}} @@ -174,7 +174,7 @@ GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에 {{< /note >}} -Docker는 프라이빗 레지스트리를 위한 키를 `$HOME/.dockercfg` 또는 `$HOME/.docker/config.json` 파일에 저장한다. 만약 동일한 파일을 +Docker는 프라이빗 레지스트리를 위한 키를 `$HOME/.dockercfg` 또는 `$HOME/.docker/config.json` 파일에 저장한다. 만약 동일한 파일을 아래의 검색 경로 리스트에 넣으면, kubelete은 이미지를 풀 할 때 해당 파일을 자격 증명 공급자로 사용한다. * `{--root-dir:-/var/lib/kubelet}/config.json` @@ -190,11 +190,11 @@ Docker는 프라이빗 레지스트리를 위한 키를 `$HOME/.dockercfg` 또 아마도 kubelet을 위한 사용자의 환경 파일에 `HOME=/root`을 명시적으로 설정해야 할 것이다. {{< /note >}} -프라이빗 레지스트리를 사용도록 사용자의 노드를 구성하기 위해서 권장되는 단계는 다음과 같다. 이 +프라이빗 레지스트리를 사용도록 사용자의 노드를 구성하기 위해서 권장되는 단계는 다음과 같다. 이 예제의 경우, 사용자의 데스크탑/랩탑에서 아래 내용을 실행한다. 1. 
사용하고 싶은 각 자격 증명 세트에 대해서 `docker login [서버]`를 실행한다. 이것은 `$HOME/.docker/config.json`를 업데이트한다. - 1. 편집기에서 `$HOME/.docker/config.json`를 보고 사용하고 싶은 자격 증명만 포함하고 있는지 확인한다. + 1. 편집기에서 `$HOME/.docker/config.json`를 보고 사용하고 싶은 자격 증명만 포함하고 있는지 확인한다. 1. 노드의 리스트를 구한다. 예를 들면 다음과 같다. - 이름을 원하는 경우: `nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')` - IP를 원하는 경우: `nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')` @@ -241,11 +241,11 @@ kubectl describe pods/private-image-test-1 | grep 'Failed' ``` -클러스터의 모든 노드가 반드시 동일한 `.docker/config.json`를 가져야 한다. 그렇지 않으면, 파드가 -일부 노드에서만 실행되고 다른 노드에서는 실패할 것이다. 예를 들어, 노드 오토스케일링을 사용한다면, 각 인스턴스 +클러스터의 모든 노드가 반드시 동일한 `.docker/config.json`를 가져야 한다. 그렇지 않으면, 파드가 +일부 노드에서만 실행되고 다른 노드에서는 실패할 것이다. 예를 들어, 노드 오토스케일링을 사용한다면, 각 인스턴스 템플릿은 `.docker/config.json`을 포함하거나 그것을 포함한 드라이브를 마운트해야 한다. -프라이빗 레지스트리 키가 `.docker/config.json`에 추가되고 나면 모든 파드는 +프라이빗 레지스트리 키가 `.docker/config.json`에 추가되고 나면 모든 파드는 프라이빗 레지스트리의 이미지에 읽기 접근 권한을 가지게 될 것이다. ### 미리 내려받은 이미지 @@ -255,16 +255,16 @@ Google 쿠버네티스 엔진에서 동작 중이라면, 이미 각 노드에 Go {{< /note >}} {{< note >}} -이 방법은 노드의 구성을 제어할 수 있는 경우에만 적합하다. 이 방법은 -GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에 대해서는 신뢰성 있게 작동하지 +이 방법은 노드의 구성을 제어할 수 있는 경우에만 적합하다. 이 방법은 +GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에 대해서는 신뢰성 있게 작동하지 않을 것이다. {{< /note >}} 기본적으로, kubelet은 지정된 레지스트리에서 각 이미지를 풀 하려고 할 것이다. -그러나, 컨테이너의 `imagePullPolicy` 속성이 `IfNotPresent` 또는 `Never`으로 설정되어 있다면, +그러나, 컨테이너의 `imagePullPolicy` 속성이 `IfNotPresent` 또는 `Never`으로 설정되어 있다면, 로컬 이미지가 사용된다(우선적으로 또는 배타적으로). -레지스트리 인증의 대안으로 미리 풀 된 이미지에 의존하고 싶다면, +레지스트리 인증의 대안으로 미리 풀 된 이미지에 의존하고 싶다면, 클러스터의 모든 노드가 동일한 미리 내려받은 이미지를 가지고 있는지 확인해야 한다. 이것은 특정 이미지를 속도를 위해 미리 로드하거나 프라이빗 레지스트리에 대한 인증의 대안으로 사용될 수 있다. @@ -274,7 +274,7 @@ GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에 ### 파드에 ImagePullSecrets 명시 {{< note >}} -이 방법은 현재 Google 쿠버네티스 엔진, GCE 및 노드 생성이 자동화된 모든 클라우드 제공자에게 +이 방법은 현재 Google 쿠버네티스 엔진, GCE 및 노드 생성이 자동화된 모든 클라우드 제공자에게 권장된다. {{< /note >}} @@ -288,10 +288,10 @@ GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에 kubectl create secret docker-registry --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL ``` -만약 Docker 자격 증명 파일이 이미 존재한다면, 위의 명령을 사용하지 않고, +만약 Docker 자격 증명 파일이 이미 존재한다면, 위의 명령을 사용하지 않고, 자격 증명 파일을 쿠버네티스 시크릿으로 가져올 수 있다. [기존 Docker 자격 증명으로 시크릿 생성](/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials)에서 관련 방법을 설명하고 있다. -`kubectl create secret docker-registry`는 +`kubectl create secret docker-registry`는 하나의 개인 레지스트리에서만 작동하는 시크릿을 생성하기 때문에, 여러 개인 컨테이너 레지스트리를 사용하는 경우 특히 유용하다. @@ -302,7 +302,7 @@ kubectl create secret docker-registry --docker-server=DOCKER_REGISTRY_SER #### 파드의 imagePullSecrets 참조 -이제, `imagePullSecrets` 섹션을 파드의 정의에 추가함으로써 해당 시크릿을 +이제, `imagePullSecrets` 섹션을 파드의 정의에 추가함으로써 해당 시크릿을 참조하는 파드를 생성할 수 있다. ```shell @@ -328,38 +328,38 @@ EOF 이것은 프라이빗 레지스트리를 사용하는 각 파드에 대해서 수행될 필요가 있다. -그러나, 이 필드의 셋팅은 [서비스 어카운트](/docs/user-guide/service-accounts) 리소스에 +그러나, 이 필드의 셋팅은 [서비스 어카운트](/docs/user-guide/service-accounts) 리소스에 imagePullSecrets을 셋팅하여 자동화할 수 있다. 자세한 지침을 위해서는 [서비스 어카운트에 ImagePullSecrets 추가](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)를 확인한다. -이것은 노드 당 `.docker/config.json`와 함께 사용할 수 있다. 자격 증명은 +이것은 노드 당 `.docker/config.json`와 함께 사용할 수 있다. 자격 증명은 병합될 것이다. 이 방법은 Google 쿠버네티스 엔진에서 작동될 것이다. ### 유스케이스 -프라이빗 레지스트리를 구성하기 위한 많은 솔루션이 있다. 다음은 여러 가지 -일반적인 유스케이스와 제안된 솔루션이다. 
+프라이빗 레지스트리를 구성하기 위한 많은 솔루션이 있다. 다음은 여러 가지
+일반적인 유스케이스와 제안된 솔루션이다.

 1. 비소유 이미지(예를 들어, 오픈소스)만 실행하는 클러스터의 경우. 이미지를 숨길 필요가 없다.
    - Docker hub의 퍼블릭 이미지를 사용한다.
      - 설정이 필요 없다.
    - GCE 및 Google 쿠버네티스 엔진에서는, 속도와 가용성 향상을 위해서 로컬 미러가 자동적으로 사용된다.
+1. 모든 클러스터 사용자에게는 보이지만, 회사 외부에는 숨겨야 하는 일부 독점 이미지를
    실행하는 클러스터의 경우.
    - 호스트 된 프라이빗 [Docker 레지스트리](https://docs.docker.com/registry/)를 사용한다.
      - 그것은 [Docker Hub](https://hub.docker.com/signup)에 호스트 되어 있거나, 다른 곳에 되어 있을 것이다.
      - 위에 설명된 바와 같이 수동으로 .docker/config.json을 구성한다.
    - 또는, 방화벽 뒤에서 읽기 접근 권한을 가진 내부 프라이빗 레지스트리를 실행한다.
      - 쿠버네티스 구성은 필요 없다.
    - 또는, GCE 및 Google 쿠버네티스 엔진에서는, 프로젝트의 Google 컨테이너 레지스트리를 사용한다.
      - 그것은 수동 노드 구성에 비해서 클러스터 오토스케일링과 더 잘 동작할 것이다.
    - 또는, 노드의 구성 변경이 불편한 클러스터에서는, `imagePullSecrets`를 사용한다.
 1. 독점 이미지를 가진 클러스터로, 그 중 일부가 더 엄격한 접근 제어를 필요로 하는 경우.
    - [AlwaysPullImages 어드미션 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)가 활성화되어 있는지 확인한다. 그렇지 않으면, 모든 파드가 잠재적으로 모든 이미지에 접근 권한을 가진다.
    - 민감한 데이터는 이미지 안에 포장하는 대신, "시크릿" 리소스로 이동한다.
 1. 멀티-테넌트 클러스터에서 각 테넌트가 자신의 프라이빗 레지스트리를 필요로 하는 경우.
    - [AlwaysPullImages 어드미션 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)가 활성화되어 있는지 확인한다. 그렇지 않으면, 모든 파드가 잠재적으로 모든 이미지에 접근 권한을 가진다.
    - 인가가 요구되도록 프라이빗 레지스트리를 실행한다.
    - 각 테넌트에 대한 레지스트리 자격 증명을 생성하고, 시크릿에 넣고, 각 테넌트 네임스페이스에 시크릿을 채운다.
    - 테넌트는 해당 시크릿을 각 네임스페이스의 imagePullSecrets에 추가한다.

diff --git a/content/ko/docs/concepts/containers/runtime-class.md b/content/ko/docs/concepts/containers/runtime-class.md
index 2ec1499dc6a6d..befcefcfe4294 100644
--- a/content/ko/docs/concepts/containers/runtime-class.md
+++ b/content/ko/docs/concepts/containers/runtime-class.md
@@ -79,6 +79,9 @@ metadata:
   handler: myconfiguration  # 상응하는 CRI 설정의 이름임
 ```

+런타임 클래스 오브젝트의 이름은 유효한
+[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다.
+
 {{< note >}}
 런타임 클래스 쓰기 작업(create/update/patch/delete)은
 클러스터 관리자로 제한할 것을 권장한다. 이것은 일반적으로
 기본 설정이다.

diff --git a/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md b/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md
index c7a2054d427d3..f8db7a20062d6 100644
--- a/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md
+++ b/content/ko/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md
@@ -7,24 +7,28 @@ weight: 10

 {{% capture overview %}}

 애그리게이션 레이어는 코어 쿠버네티스 API가 제공하는 기능 이외에 더 많은 기능을 제공할 수 있도록 추가 API를 더해 쿠버네티스를 확장할 수 있게 해준다.
+추가 API는 [서비스-카탈로그](/docs/concepts/extend-kubernetes/service-catalog/)와 같이 미리 만들어진 솔루션이거나 사용자가 직접 개발한 API일 수 있다.
+
+애그리게이션 레이어는 [사용자 정의 리소스](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)와는 다르며, 애그리게이션 레이어는 {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} 가 새로운 종류의 오브젝트를 인식하도록 하는 방법이다.

 {{% /capture %}}

 {{% capture body %}}

-## 개요
-
-애그리게이션 레이어는 부가적인 쿠버네티스-스타일 API를 클러스터에 설치할 수 있게 해준다. 이는 [서비스-카탈로그](https://github.com/kubernetes-incubator/service-catalog/blob/master/README.md)와 같이 사전에 구축되어 있는 서드 파티 솔루션일 수 있고, [apiserver-builder](https://github.com/kubernetes-incubator/apiserver-builder/blob/master/README.md)로 시작해볼 수 있는 것과 같은 사용자 정의 API일 수도 있다.
+## 애그리게이션 레이어

 애그리게이션 레이어는 kube-apiserver 프로세스 안에서 구동된다. 확장 리소스가 등록되기 전까지, 애그리게이션 레이어는 아무 일도 하지 않는다.
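바로 아래에서 설명하는 것처럼, 확장 API는 APIService 오브젝트로 등록한다. 다음은 APIService 오브젝트가 어떤 형태인지 가정해 본 최소한의 예시이다. 이름, 그룹, 서비스 값은 모두 설명을 위해 임의로 정한 것이며, TLS 관련 설정(`caBundle` 등)은 생략했다.

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  # 관례적으로 <버전>.<그룹> 형태의 이름을 사용한다(가정한 값임)
  name: v1.myextensions.mycompany.io
spec:
  group: myextensions.mycompany.io   # 요구(claim)하려는 API 그룹
  version: v1                        # 요구하려는 API 버전
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:
    # 해당 경로의 요청을 프록시할 extension API server의 서비스(가정한 값임)
    name: my-extension-apiserver
    namespace: my-namespace
```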
 API를 등록하기 위해서, 사용자는 쿠버네티스 API 내에서 URL 경로를 "요구하는(claim)" APIService 오브젝트를 추가해야 한다. 이때, 애그리게이션 레이어는 해당 API 경로(예: /apis/myextensions.mycompany.io/v1/...)로 전송되는 모든 것을 등록된 APIService로 프록시하게 된다.

-대개, APIService는 클러스터 내에서 구동 중인 파드(pod) 내 *extension-apiserver* 로 구현된다. 이 extension-apiserver는 일반적으로 추가된 리소스에 대한 적극적인 관리가 필요한 경우 하나 이상의 컨트롤러와 짝지어진다. 결과적으로, apiserver-builder는 실제로 그 둘 모두에 대한 스켈레톤을 제공한다. 또 다른 예로, 서비스-카탈로그가 설치된 경우에는, 제공하는 서비스에 대한 extension-apiserver와 컨트롤러를 모두 제공한다.
+APIService를 구현하는 가장 일반적인 방법은 클러스터 내에 실행되고 있는 파드에서 *extension API server* 를 실행하는 것이다. extension API server를 사용해서 클러스터의 리소스를 관리하는 경우 extension API server("extension-apiserver" 라고도 한다)는 일반적으로 하나 이상의 {{< glossary_tooltip text="컨트롤러" term_id="controller" >}}와 쌍을 이룬다. apiserver-builder 라이브러리는 extension API server와 연관된 컨트롤러에 대한 스켈레톤을 제공한다.
+
+### 응답 레이턴시

 Extension-apiserver는 kube-apiserver로 오가는 연결의 레이턴시가 낮아야 한다.
-특히, kube-apiserver로 부터의 디스커버리 요청은 왕복 레이턴시가 5초 이내여야 한다.
-사용자의 환경에서 달성할 수 없는 경우에는, 이를 어떻게 바꿀 수 있을지 고려해야 한다. 지금은,
-`EnableAggregatedDiscoveryTimeout=false` 기능 게이트를 설정해서 타임아웃 제한을
-비활성화 할 수 있다. 이 기능은 미래의 릴리스에서는 삭제될 예정이다.
+kube-apiserver로 부터의 디스커버리 요청은 왕복 레이턴시가 5초 이내여야 한다.
+
+extension API server가 레이턴시 요구 사항을 달성할 수 없는 경우 이를 충족할 수 있도록 변경하는 것을 고려한다.
+`EnableAggregatedDiscoveryTimeout=false` [기능 게이트](/docs/reference/command-line-tools-reference/feature-gates/)를 설정해서 타임아웃
+제한을 비활성화 할 수 있다. 이 사용 중단(deprecated)된 기능 게이트는 향후 릴리스에서 제거될 예정이다.

 {{% /capture %}}

@@ -33,6 +37,6 @@ Extension-apiserver는 kube-apiserver로 오가는 연결의 레이턴시가 낮

 * 사용자의 환경에서 Aggregator를 동작시키려면, [애그리게이션 레이어를 설정한다](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/).
 * 다음에, [extension api-server를 구성해서](/docs/tasks/access-kubernetes-api/setup-extension-api-server/) 애그리게이션 레이어와 연계한다.
 * 또한, 어떻게 [쿠버네티스 API를 커스텀 리소스 데피니션으로 확장하는지](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/)를 배워본다.
+* [API 서비스](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#apiservice-v1-apiregistration-k8s-io)의 사양을 읽어본다.

 {{% /capture %}}
-

diff --git a/content/ko/docs/concepts/overview/components.md b/content/ko/docs/concepts/overview/components.md
index dae6876343bfb..d45adac408895 100644
--- a/content/ko/docs/concepts/overview/components.md
+++ b/content/ko/docs/concepts/overview/components.md
@@ -21,12 +21,12 @@ card:

 {{% /capture %}}

 {{% capture body %}}
-## 컨트롤 플래인 컴포넌트
+## 컨트롤 플레인 컴포넌트

-컨트롤 플래인 컴포넌트는 클러스터에 관한 전반적인 결정(예를 들어, 스케줄링)을 수행하고 클러스터 이벤트(예를 들어, 디플로이먼트의 `replicas` 필드에 대한 요구조건을 충족되지 않을 경우 새로운 {{< glossary_tooltip text="파드" term_id="pod">}}를 구동시키는 것)를 감지하고 반응한다.
+컨트롤 플레인 컴포넌트는 클러스터에 관한 전반적인 결정(예를 들어, 스케줄링)을 수행하고 클러스터 이벤트(예를 들어, 디플로이먼트의 `replicas` 필드에 대한 요구 조건이 충족되지 않을 경우 새로운 {{< glossary_tooltip text="파드" term_id="pod">}}를 구동시키는 것)를 감지하고 반응한다.

-컨트롤 플래인 컴포넌트는 클러스터 내 어떠한 머신에서든지 동작 될 수 있다. 그러나
-간결성을 위하여, 구성 스크립트는 보통 동일 머신 상에 모든 컨트롤 플래인 컴포넌트를 구동시키고,
+컨트롤 플레인 컴포넌트는 클러스터 내 어떠한 머신에서든지 동작할 수 있다. 그러나
+간결성을 위하여, 구성 스크립트는 보통 동일 머신 상에 모든 컨트롤 플레인 컴포넌트를 구동시키고,
 사용자 컨테이너는 해당 머신 상에 동작시키지 않는다. 다중-마스터-VM 설치 예제를 보려면
 [고가용성 클러스터 구성하기](/docs/admin/high-availability/)를 확인해본다.

@@ -49,7 +49,7 @@ card:
   이들 컨트롤러는 다음을 포함한다.

   * 노드 컨트롤러: 노드가 다운되었을 때 통지와 대응에 관한 책임을 가진다.
-  * 레플리케이션 컨트롤러: 시스템의 모든 레플리케이션 컨트롤러 오브젝트에 대해 알맞는 수의 파드들을
+  * 레플리케이션 컨트롤러: 시스템의 모든 레플리케이션 컨트롤러 오브젝트에 대해 알맞은 수의 파드들을
     유지시켜 주는 책임을 가진다.
   * 엔드포인트 컨트롤러: 엔드포인트 오브젝트를 채운다(즉, 서비스와 파드를 연결시킨다.)
   * 서비스 어카운트 & 토큰 컨트롤러: 새로운 네임스페이스에 대한 기본 계정과 API 접근 토큰을 생성한다.

@@ -60,14 +60,14 @@ card:
 cloud-controller-manager는 클라우드-제공사업자-특유 컨트롤러 루프만을 동작시킨다. 이 컨트롤러 루프는 kube-controller-manager에서 비활성 시켜야만 한다.
kube-controller-manager를 구동시킬 때 `--cloud-provider` 플래그를 `external`로 설정함으로써 이 컨트롤러 루프를 비활성 시킬 수 있다. -cloud-controller-manager는 클라우드 밴더 코드와 쿠버네티스 코드가 서로 독립적으로 발전시켜 나갈 수 있도록 해준다. 이전 릴리스에서는, 코어 쿠버네티스 코드가 기능상으로 클라우드-제공사업자-특유 코드에 대해 의존적이었다. 향후 릴리스에서, 클라우드 밴더에 따른 코드는 클라우드 밴더 자체에 의해 유지되도록 하여야만 하며, 쿠버네티스가 동작하는 동안 cloud-controller-manager에 연계되도록 하여야만 한다. +cloud-controller-manager는 클라우드 벤더 코드와 쿠버네티스 코드가 서로 독립적으로 발전시켜 나갈 수 있도록 해준다. 이전 릴리스에서는 코어 쿠버네티스 코드가 기능상으로 클라우드-제공사업자-특유 코드에 대해 의존적이었다. 향후 릴리스에서 클라우드 벤더만의 코드는 클라우드 벤더가 유지해야 하며, 쿠버네티스가 동작하는 동안 cloud-controller-manager에 연계되도록 해야 한다. 다음 컨트롤러들은 클라우드 제공사업자의 의존성을 갖는다. - * 노드 컨트롤러: 노드가 응답을 멈추고 나서 클라우드 상에서 삭제되어 졌는지 확정하기 위해 클라우드 제공사업자에게 확인하는 것 - * 라우트 컨트롤러: 바탕을 이루는 클라우드 인프라에 경로를 구성하는 것 + * 노드 컨트롤러: 노드가 응답을 멈춘 후 클라우드 상에서 삭제되었는지 판별하기 위해 클라우드 제공사업자에게 확인하는 것 + * 라우트 컨트롤러: 기본 클라우드 인프라에 경로를 구성하는 것 * 서비스 컨트롤러: 클라우드 제공사업자 로드밸런서를 생성, 업데이트 그리고 삭제하는 것 - * 볼륨 컨트롤러: 볼륨의 생성, 부착 그리고 마운트 하는 것과 볼륨을 조정하기 위해 클라우드 제공사업자와 상호작용하는 것 + * 볼륨 컨트롤러: 볼륨의 생성, 연결 그리고 마운트 하는 것과 오케스트레이션하기 위해 클라우드 제공사업자와 상호작용하는 것 ## 노드 컴포넌트 @@ -97,7 +97,7 @@ cloud-controller-manager는 클라우드 밴더 코드와 쿠버네티스 코드 ### DNS -여타 애드온들이 절대적으로 요구되지 않지만, 많은 예시에서 그것을 필요로 하기 때문에 모든 쿠버네티스 클러스터는 [클러스터 DNS](/ko/docs/concepts/services-networking/dns-pod-service/)를 갖추어야만 한다. +여타 애드온들이 절대적으로 요구되지 않지만, 많은 예시에서 필요로 하기 때문에 모든 쿠버네티스 클러스터는 [클러스터 DNS](/ko/docs/concepts/services-networking/dns-pod-service/)를 갖추어야만 한다. 클러스터 DNS는 구성환경 내 다른 DNS 서버와 더불어, 쿠버네티스 서비스를 위해 DNS 레코드를 제공해주는 DNS 서버다. @@ -105,12 +105,12 @@ cloud-controller-manager는 클라우드 밴더 코드와 쿠버네티스 코드 ### 웹 UI (대시보드) -[대시보드](/ko/docs/tasks/access-application-cluster/web-ui-dashboard/)는 쿠버네티스 클러스터를 위한 범용의 웹 기반 UI다. 사용자가 클러스터 자체뿐만 아니라, 클러스터에서 동작하는 애플리케이션에 대한 관리와 고장처리를 할 수 있도록 허용해준다. +[대시보드](/ko/docs/tasks/access-application-cluster/web-ui-dashboard/)는 쿠버네티스 클러스터를 위한 범용의 웹 기반 UI다. 사용자가 클러스터 자체뿐만 아니라, 클러스터에서 동작하는 애플리케이션에 대한 관리와 문제 해결을 할 수 있도록 해준다. ### 컨테이너 리소스 모니터링 [컨테이너 리소스 모니터링](/ko/docs/tasks/debug-application-cluster/resource-usage-monitoring/)은 -중앙 데이터베이스 내에 컨테이너들에 대한 포괄적인 시계열 매트릭스를 기록하고 그 데이터를 열람하기 위한 UI를 제공해 준다. +중앙 데이터베이스 내의 컨테이너들에 대한 포괄적인 시계열 매트릭스를 기록하고 그 데이터를 열람하기 위한 UI를 제공해 준다. ### 클러스터-레벨 로깅 diff --git a/content/ko/docs/concepts/overview/kubernetes-api.md b/content/ko/docs/concepts/overview/kubernetes-api.md index bc38340964cc9..ce0b55e1136b7 100644 --- a/content/ko/docs/concepts/overview/kubernetes-api.md +++ b/content/ko/docs/concepts/overview/kubernetes-api.md @@ -17,7 +17,7 @@ API에 원격 접속하는 방법은 [Controlling API Access doc](/docs/referenc 쿠버네티스 API는 시스템을 위한 선언적 설정 스키마를 위한 기초가 되기도 한다. [kubectl](/docs/reference/kubectl/overview/) 커맨드라인 툴을 사용해서 API 오브젝트를 생성, 업데이트, 삭제 및 조회할 수 있다. -쿠버네티스는 또한 API 리소스에 대해 직렬화된 상태를 (현재는 [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)에) 저장한다. +쿠버네티스는 또한 API 리소스에 대해 직렬화된 상태를 (현재는 [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)에) 저장한다. 쿠버네티스 자체는 여러 컴포넌트로 나뉘어져서 각각의 API를 통해 상호작용한다. @@ -28,7 +28,7 @@ API에 원격 접속하는 방법은 [Controlling API Access doc](/docs/referenc ## API 변경 -경험에 따르면, 성공적인 시스템은 새로운 유스케이스의 등장과 기존의 유스케이스의 변경에 맞춰 성장하고 변경될 필요가 있다. 그래서, 쿠버네티스 API가 지속적으로 변경되고 성장하기를 바란다. 그러나, 일정 기간 동안은 현존하는 클라이언트와의 호환성을 깨지 않으려고 한다. 일반적으로, 새로운 API 리소스와 새로운 리소스 필드가 주기적으로 추가될 것이다. 리소스나 필드를 없애는 일은 다음의 [API deprecation policy](/docs/reference/using-api/deprecation-policy/)를 따른다. +경험에 따르면, 성공적인 시스템은 새로운 유스케이스의 등장과 기존 유스케이스의 변경에 맞춰 성장하고 변경될 필요가 있다. 그래서, 쿠버네티스 API가 지속적으로 변경되고 성장하기를 바란다. 그러나, 일정 기간 동안은 현재의 클라이언트와의 호환성을 깨지 않으려고 한다. 일반적으로, 새로운 API 리소스와 새로운 리소스 필드가 주기적으로 추가될 것이다. 
리소스나 필드를 없애는 일은 다음의 [API deprecation policy](/docs/reference/using-api/deprecation-policy/)를 따른다. 호환되는 변경에 어떤 내용이 포함되는지, 어떻게 API를 변경하는지에 대한 자세한 내용은 [API change document](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md)에 있다. @@ -45,7 +45,7 @@ Accept | `application/json`, `application/com.github.proto-openapi.spec.v2@v1.0+ Accept-Encoding | `gzip` (이 헤더를 전달하지 않아도 됨) 1.14 이전 버전에서 형식이 구분된 엔드포인트(`/swagger.json`, `/swagger-2.0.0.json`, `/swagger-2.0.0.pb-v1`, `/swagger-2.0.0.pb-v1.gz`)는 OpenAPI 스펙을 다른 포맷으로 제공한다. -이러한 엔드포인트는 사용 중단되었으며, 쿠버네티스 1.14에서 제거됬다. +이러한 엔드포인트는 사용이 중단되었으며, 쿠버네티스 1.14에서 제거되었다. **OpenAPI 규격을 조회하는 예제** @@ -59,7 +59,7 @@ GET /swagger-2.0.0.pb-v1.gz | GET /openapi/v2 **Accept**: application/com.github 1.14 이전 버전에서 쿠버네티스 apiserver는 `/swaggerapi`에서 [Swagger v1.2](http://swagger.io/) 쿠버네티스 API 스펙을 검색하는데 사용할 수 있는 API도 제공한다. -이러한 엔드포인트는 사용 중단되었으며, 쿠버네티스 1.14에서 제거되었다. +이러한 엔드포인트는 사용이 중단되었으며, 쿠버네티스 1.14에서 제거되었다. ## API 버전 규칙 @@ -88,7 +88,7 @@ API 버전이 다른 경우는 안정성이나 기술 지원의 수준이 다르 - 코드가 잘 테스트되었다. 이 기능을 활성화 시켜도 안전하다. 기본적으로 활성화되어 있다. - 구체적인 내용이 바뀔 수는 있지만, 전반적인 기능에 대한 기술 지원이 중단되지 않는다. - 오브젝트에 대한 스키마나 문법이 다음 베타 또는 안정화 릴리스에서 호환되지 않는 방식으로 바뀔 수도 있다. 이런 경우, - 다음 버전으로 이관할 수 있는 가이드를 제공할 것이다. + 다음 버전으로 이관할 수 있는 가이드를 제공할 것이다. 이 때 API 오브젝트의 삭제, 편집 또는 재생성이 필요할 수도 있다. 편집 절차는 좀 생각해볼 필요가 있다. 이 기능에 의존하고 있는 애플리케이션은 다운타임이 필요할 수도 있다. - 다음 릴리스에서 호환되지 않을 수도 있으므로 사업적으로 중요하지 않은 용도로만 사용하기를 권장한다. 복수의 클러스터를 가지고 있어서 독립적으로 업그레이드할 수 있다면 이런 제약에서 안심이 될 수도 있겠다. @@ -119,21 +119,22 @@ API 그룹은 REST 경로와 직렬화된 객체의 `apiVersion` 필드에 명 만들 수 있다. -## API 그룹 활성화 시키기 +## API 그룹 활성화 또는 비활성화하기 특정 리소스와 API 그룹은 기본적으로 활성화되어 있다. 이들은 apiserver에서 `--runtime-config`를 설정해서 활성화하거나 비활성화 시킬 수 있다. `--runtime-config`는 쉼표로 분리된 값을 허용한다. 예를 들어서 batch/v1을 비활성화 시키려면 `--runtime-config=batch/v1=false`와 같이 설정하고, batch/v2alpha1을 활성화 시키려면 `--runtime-config=batch/v2alpha1`을 설정한다. 이 플래그는 apiserver의 런타임 설정에 쉼표로 분리된 키=값 쌍의 집합을 허용한다. -중요: 그룹이나 리소스를 활성화 또는 비활성화 시키기 위해서는 apiserver와 controller-manager를 재시작해서 -`--runtime-config` 변경을 반영시켜야 한다. +{{< note >}}그룹이나 리소스를 활성화 또는 비활성화 시키기 위해서는 apiserver와 controller-manager를 재시작해서 +`--runtime-config` 변경을 반영시켜야 한다. {{< /note >}} -## 그룹 내 리소스 활성화 시키기 +## extensions/v1beta1 그룹 내 특정 리소스 활성화하기 -데몬셋, 디플로이먼트, HorizontalPodAutoscaler, 인그레스, 잡 및 레플리카셋이 기본적으로 활성화되어 있다. -다른 확장 리소스는 apiserver의 `--runtime-config`를 설정해서 활성화 시킬 수 있다. -`--runtime-config`는 쉼표로 분리된 값을 허용한다. 예를 들어 디플로이먼트와 인그레스를 비활성화 시키려면, -`--runtime-config=extensions/v1beta1/deployments=false,extensions/v1beta1/ingresses=false`와 같이 설정한다. +데몬셋, 디플로이먼트, 스테이트풀셋, 네트워크폴리시, 파드시큐리티폴리시 그리고 레플리카셋은 `extensions/v1beta1` API 그룹에서 기본적으로 비활성화되어있다. +예시: 디플로이먼트와 데몬셋의 활성화 설정은 +`--runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true` 를 입력한다. + +{{< note >}}개별 리소스의 활성화/비활성화는 레거시 문제로 `extensions/v1beta1` API 그룹에서만 지원된다. {{< /note >}} {{% /capture %}} diff --git a/content/ko/docs/concepts/overview/what-is-kubernetes.md b/content/ko/docs/concepts/overview/what-is-kubernetes.md index 3402d68c6c7a1..14aaf5b43a374 100644 --- a/content/ko/docs/concepts/overview/what-is-kubernetes.md +++ b/content/ko/docs/concepts/overview/what-is-kubernetes.md @@ -1,5 +1,7 @@ --- title: 쿠버네티스란 무엇인가 +description: > + 쿠버네티스는 컨테이너화된 워크로드와 서비스를 관리하기 위한 이식할 수 있고, 확장 가능한 오픈소스 플랫폼으로, 선언적 구성과 자동화를 모두 지원한다. 쿠버네티스는 크고 빠르게 성장하는 생태계를 가지고 있다. 쿠버네티스 서비스, 지원 그리고 도구들은 광범위하게 제공된다. content_template: templates/concept weight: 10 card: @@ -14,9 +16,10 @@ card: {{% capture body %}} 쿠버네티스는 컨테이너화된 워크로드와 서비스를 관리하기 위한 이식성이 있고, 확장가능한 오픈소스 플랫폼이다. 쿠버네티스는 선언적 구성과 자동화를 모두 용이하게 해준다. 
 쿠버네티스는 크고, 빠르게 성장하는 생태계를 가지고 있다. 쿠버네티스 서비스, 기술 지원 및 도구는 어디서나 쉽게 이용할 수 있다.

-쿠버네티스란 명칭은 키잡이(helmsman)이나 파일럿을 뜻하는 그리스어에서 유래했다. 구글이 2014년에 쿠버네티스 프로젝트를 오픈소스화했다. 쿠버네티스는 [구글의 15여년에 걸친 대규모 상용 워크로드 운영 경험](https://ai.google/research/pubs/pub43438)을 기반으로 만들어졌으며 커뮤니티의 최고의 아이디어와 적용 사례가 결합되었다.
+쿠버네티스란 명칭은 키잡이(helmsman)나 파일럿을 뜻하는 그리스어에서 유래했다. 구글이 2014년에 쿠버네티스 프로젝트를 오픈소스화했다. 쿠버네티스는 프로덕션 워크로드를 대규모로 운영하는 [15년 이상의 구글 경험](/blog/2015/04/borg-predecessor-to-kubernetes/)과 커뮤니티의 최고의 아이디어와 적용 사례가 결합되어 있다.

 ## 여정 돌아보기
+
 시간이 지나면서 쿠버네티스가 왜 유용하게 되었는지 살펴보자.

 ![배포 혁명](/images/docs/Container_Evolution.svg)

@@ -26,9 +29,9 @@ card:

 **가상화된 배포 시대:** 그 해결책으로 가상화가 도입되었다. 이는 단일 물리 서버의 CPU에서 여러 가상 시스템 (VM)을 실행할 수 있게 한다. 가상화를 사용하면 VM간에 애플리케이션을 격리하고 애플리케이션의 정보를 다른 애플리케이션에서 자유롭게 액세스 할 수 없으므로, 일정 수준의 보안성을 제공할 수 있다.

-가상화를 사용하면 물리 서버에서 리소스를 보다 효율적으로 활용할 수 있으며, 쉽게 애플리케이션을 추가하거나 업데이트할 수 있고 하드웨어 비용을 절감할 수 있어 더 나은 확장성을 제공한다. 가상화를 통해 일련의 물리 리소스를 폐기가능한(disposable) 가상 머신으로 구성된 클러스터로 만들 수 있다.
+가상화를 사용하면 물리 서버에서 리소스를 보다 효율적으로 활용할 수 있으며, 쉽게 애플리케이션을 추가하거나 업데이트할 수 있고 하드웨어 비용을 절감할 수 있어 더 나은 확장성을 제공한다. 가상화를 통해 일련의 물리 리소스를 폐기 가능한(disposable) 가상 머신으로 구성된 클러스터로 만들 수 있다.

-각 VM은 가상화된 하드웨어 상에서 자체 운영체제를 포함한 모든 구성 요소를 실행하는 전체 시스템이다.
+각 VM은 가상화된 하드웨어 상에서 자체 운영체제를 포함한 모든 구성 요소를 실행하는 하나의 완전한 머신이다.

 **컨테이너 개발 시대:** 컨테이너는 VM과 유사하지만 격리 속성을 완화하여 애플리케이션 간에 운영체제(OS)를 공유한다. 그러므로 컨테이너는 가볍다고 여겨진다. VM과 마찬가지로 컨테이너에는 자체 파일 시스템, CPU, 메모리, 프로세스 공간 등이 있다. 기본 인프라와의 종속성을 끊었기 때문에, 클라우드나 OS 배포본에 모두 이식할 수 있다.

@@ -36,16 +39,16 @@ card:

 * 기민한 애플리케이션 생성과 배포: VM 이미지를 사용하는 것에 비해 컨테이너 이미지 생성이 보다 쉽고 효율적임.
 * 지속적인 개발, 통합 및 배포: 안정적이고 주기적으로 컨테이너 이미지를 빌드해서 배포할 수 있고 (이미지의 불변성 덕에) 빠르고 쉽게 롤백할 수 있다.
-* 개발과 운영의 관심사 분리: 배포 시점이 아닌 빌드/릴리스 시점에 애플리케이션 컨테이너 이미지를 만들기 때문에, 애플리케이션이 인프라스트럭처에서 디커플된다.
+* 개발과 운영의 관심사 분리: 배포 시점이 아닌 빌드/릴리스 시점에 애플리케이션 컨테이너 이미지를 만들기 때문에, 애플리케이션이 인프라스트럭처에서 분리된다.
 * 가시성은 OS 수준의 정보와 메트릭에 머무르지 않고, 애플리케이션의 헬스와 그 밖의 시그널을 볼 수 있다.
 * 개발, 테스팅 및 운영 환경에 걸친 일관성: 랩탑에서도 클라우드에서와 동일하게 구동된다.
-* 클라우드 및 OS 배포판 간 이식성: Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine 및 다른 어디에서든 구동된다.
-* 애플리케이션 중심 관리: 가상 하드웨어의 OS에서 애플리케이션을 구동하는 수준에서 OS의 논리적인 자원을 사용하여 애플리케이션을 구동하는 수준으로 추상화 수준이 높아진다.
+* 클라우드 및 OS 배포판 간 이식성: Ubuntu, RHEL, CoreOS, 온-프레미스, 주요 퍼블릭 클라우드와 어디에서든 구동된다.
+* 애플리케이션 중심 관리: 가상 하드웨어 상에서 OS를 실행하는 수준에서 논리적인 리소스를 사용하는 OS 상에서 애플리케이션을 실행하는 수준으로 추상화 수준이 높아진다.
 * 느슨하게 커플되고, 분산되고, 유연하며, 자유로운 마이크로서비스: 애플리케이션은 단일 목적의 머신에서 모놀리식 스택으로 구동되지 않고 보다 작고 독립적인 단위로 쪼개져서 동적으로 배포되고 관리될 수 있다.
-* 자원 격리: 애플리케이션 성능을 예측할 수 있다.
-* 자원 사용량: 고효율 고집적.
+* 리소스 격리: 애플리케이션 성능을 예측할 수 있다.
+* 리소스 사용량: 고효율 고집적.

-## 쿠버네티스가 왜 필요하고 무엇을 할 수 있나
+## 쿠버네티스가 왜 필요하고 무엇을 할 수 있나 {#why-you-need-kubernetes-and-what-can-it-do}

 컨테이너는 애플리케이션을 포장하고 실행하는 좋은 방법이다. 프로덕션 환경에서는 애플리케이션을 실행하는 컨테이너를 관리하고 가동 중지 시간이 없는지 확인해야한다. 예를 들어 컨테이너가 다운되면 다른 컨테이너를 다시 시작해야한다. 이 문제를 시스템에 의해 처리한다면 더 쉽지 않을까?

@@ -64,11 +67,11 @@ card:
 * **자동화된 복구(self-healing)**
 쿠버네티스는 실패한 컨테이너를 다시 시작하고, 컨테이너를 교체하며, '사용자 정의 상태 검사'에 응답하지 않는 컨테이너를 죽이고, 서비스 준비가 끝날 때까지 그러한 과정을 클라이언트에 보여주지 않는다.
 * **시크릿과 구성 관리**
-쿠버네티스를 사용하면 암호, OAuth 토큰 및 SSH 키와 같은 중요한 정보를 저장하고 관리 할 수 있다. 컨테이너 이미지를 재구성하지 않고 스택 구성에 비밀을 노출하지 않고도 비밀 및 애플리케이션 구성을 배포 및 업데이트 할 수 있다.
+쿠버네티스를 사용하면 암호, OAuth 토큰 및 SSH 키와 같은 중요한 정보를 저장하고 관리 할 수 있다. 컨테이너 이미지를 재구성하지 않고 스택 구성에 시크릿을 노출하지 않고도 시크릿 및 애플리케이션 구성을 배포 및 업데이트 할 수 있다.

 ## 쿠버네티스가 아닌 것

-쿠버네티스는 전통적인, 모든 것이 포함된 Platform as a Service(PaaS)가 아니다. 쿠버네티스는 하드웨어 수준보다는 컨테이너 수준에서 운영되기 때문에, PaaS가 일반적으로 제공하는 배포, 스케일링, 로드 밸런싱, 로깅 및 모니터링과 같은 기능에서 공통점이 있기도 하다. 하지만, 쿠버네티스는 모놀리식(monolithic)하지 않아서, 이런 기본 솔루션이 선택적이며 추가나 제거가 용이하다. 쿠버네티스는 개발자 플랫폼을 만드는 구성 요소를 제공하지만, 필요한 경우 사용자의 선택권과 유연성을 지켜준다.
+쿠버네티스는 전통적인, 모든 것이 포함된 Platform as a Service(PaaS)가 아니다. 쿠버네티스는 하드웨어 수준보다는 컨테이너 수준에서 운영되기 때문에, PaaS가 일반적으로 제공하는 배포, 스케일링, 로드 밸런싱, 로깅 및 모니터링과 같은 기능에서 공통점이 있기도 하다. 하지만, 쿠버네티스는 모놀리식(monolithic)이 아니어서, 이런 기본 솔루션이 선택적이며 추가나 제거가 용이하다. 쿠버네티스는 개발자 플랫폼을 만드는 구성 요소를 제공하지만, 필요한 경우 사용자의 선택권과 유연성을 지켜준다. 쿠버네티스는: @@ -78,7 +81,7 @@ card: * 로깅, 모니터링 또는 경보 솔루션을 포함하지 않는다. 개념 증명을 위한 일부 통합이나, 메트릭을 수집하고 노출하는 메커니즘을 제공한다. * 기본 설정 언어/시스템(예, Jsonnet)을 제공하거나 요구하지 않는다. 선언적 명세의 임의적인 형식을 목적으로 하는 선언적 API를 제공한다. * 포괄적인 머신 설정, 유지보수, 관리, 자동 복구 시스템을 제공하거나 채택하지 않는다. -* 추가로, 쿠버네티스는 단순한 오케스트레이션 시스템이 아니다. 사실, 쿠버네티스는 오케스트레이션의 필요성을 없애준다. 오케스트레이션의 기술적인 정의는 A를 먼저 한 다음, B를 하고, C를 하는 것과 같이 정의된 워크플로우를 수행하는 것이다. 반면에, 쿠버네티스는 독립적이고 조합 가능한 제어 프로세스들로 구성되어 있다. 이 프로세스는 지속적으로 현재 상태를 입력받은 의도된 상태로 나아가도록 한다. A에서 C로 어떻게 갔는지는 상관이 없다. 중앙화된 제어도 필요치 않다. 이로써 시스템이 보다 더 사용하기 쉬워지고, 강력해지며, 견고하고, 회복력을 갖추게 되며, 확장 가능해진다. +* 추가로, 쿠버네티스는 단순한 오케스트레이션 시스템이 아니다. 사실, 쿠버네티스는 오케스트레이션의 필요성을 없애준다. 오케스트레이션의 기술적인 정의는 A를 먼저 한 다음, B를 하고, C를 하는 것과 같이 정의된 워크플로우를 수행하는 것이다. 반면에, 쿠버네티스는 독립적이고 조합 가능한 제어 프로세스들로 구성되어 있다. 이 프로세스는 지속적으로 현재 상태를 입력받은 의도한 상태로 나아가도록 한다. A에서 C로 어떻게 갔는지는 상관이 없다. 중앙화된 제어도 필요치 않다. 이로써 시스템이 보다 더 사용하기 쉬워지고, 강력해지며, 견고하고, 회복력을 갖추게 되며, 확장 가능해진다. {{% /capture %}} diff --git a/content/ko/docs/concepts/overview/working-with-objects/field-selectors.md b/content/ko/docs/concepts/overview/working-with-objects/field-selectors.md index 3e7868c269cb9..06326befa8ef5 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/field-selectors.md +++ b/content/ko/docs/concepts/overview/working-with-objects/field-selectors.md @@ -3,7 +3,7 @@ title: 필드 셀렉터 weight: 60 --- -_필드 셀렉터_ 는 한 개 이상의 리소스 필드 값에 따라 [쿠버네티스 리소스를 선택](/docs/concepts/overview/working-with-objects/kubernetes-objects)하기 위해 사용된다. 필드 셀렉터 쿼리의 예시는 다음과 같다. +_필드 셀렉터_ 는 한 개 이상의 리소스 필드 값에 따라 [쿠버네티스 리소스를 선택](/ko/docs/concepts/overview/working-with-objects/kubernetes-objects/)하기 위해 사용된다. 필드 셀렉터 쿼리의 예시는 다음과 같다. * `metadata.name=my-service` * `metadata.namespace!=default` @@ -45,7 +45,7 @@ kubectl get services --all-namespaces --field-selector metadata.namespace!=defa ## 연계되는 셀렉터 -[레이블](/docs/concepts/overview/working-with-objects/labels)을 비롯한 다른 셀렉터처럼, 쉼표로 구분되는 목록을 통해 필드 셀렉터를 연계해서 사용할 수 있다. 다음의 `kubectl` 커맨드는 `status.phase` 필드가 `Running` 이 아니고, `spec.restartPolicy` 필드가 `Always` 인 모든 파드를 선택한다. +[레이블](/ko/docs/concepts/overview/working-with-objects/labels)을 비롯한 다른 셀렉터처럼, 쉼표로 구분되는 목록을 통해 필드 셀렉터를 연계해서 사용할 수 있다. 다음의 `kubectl` 커맨드는 `status.phase` 필드가 `Running` 이 아니고, `spec.restartPolicy` 필드가 `Always` 인 모든 파드를 선택한다. ```shell kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always diff --git a/content/ko/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/ko/docs/concepts/overview/working-with-objects/kubernetes-objects.md index a3f2a02e875ba..c62ae41b87949 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/content/ko/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -20,29 +20,45 @@ card: * 그 애플리케이션이 이용할 수 있는 리소스 * 그 애플리케이션이 어떻게 재구동 정책, 업그레이드, 그리고 내고장성과 같은 것에 동작해야 하는지에 대한 정책 -쿠버네티스 오브젝트는 하나의 "의도를 담은 레코드" 이다. 오브젝트를 생성하게 되면, 쿠버네티스 시스템은 그 오브젝트 생성을 보장하기 위해 지속적으로 작동할 것이다. 오브젝트를 생성함으로써, 여러분이 클러스터의 워크로드를 어떤 형태로 보이고자 하는지에 대해 효과적으로 쿠버네티스 시스템에 전한다. 이것이 바로 여러분의 클러스터에 대해 *의도한 상태* 가 된다. +쿠버네티스 오브젝트는 하나의 "의도를 담은 레코드"이다. 오브젝트를 생성하게 되면, 쿠버네티스 시스템은 그 오브젝트 생성을 보장하기 위해 지속적으로 작동할 것이다. 오브젝트를 생성함으로써, 여러분이 클러스터의 워크로드를 어떤 형태로 보이고자 하는지에 대해 효과적으로 쿠버네티스 시스템에 전한다. 이것이 바로 여러분의 클러스터에 대해 *의도한 상태* 가 된다. 
생성이든, 수정이든, 또는 삭제든 쿠버네티스 오브젝트를 동작시키려면, [쿠버네티스 API](/ko/docs/concepts/overview/kubernetes-api/)를 이용해야 한다. 예를 들어, `kubectl` 커맨드-라인 인터페이스를 이용할 때, CLI는 여러분 대신 필요한 쿠버네티스 API를 호출해 준다. 또한, 여러분은 [클라이언트 라이브러리](/ko/docs/reference/using-api/client-libraries/) 중 하나를 이용하여 여러분만의 프로그램에서 쿠버네티스 API를 직접 이용할 수도 있다. -### 오브젝트 스펙(spec)과 상태(status) +### 오브젝트 명세(spec)와 상태(status) -모든 쿠버네티스 오브젝트는 오브젝트의 구성을 결정해주는 두 개의 중첩된 오브젝트 필드를 포함하는데 오브젝트 *spec* 과 오브젝트 *status* 가 그것이다. 필히 제공되어야만 하는 *spec* 은, 여러분이 오브젝트가 가졌으면 하고 원하는 특징, 즉 의도한 상태를 기술한다. *status* 는 오브젝트의 *실제 상태* 를 기술하고, 쿠버네티스 시스템에 의해 제공되고 업데이트 된다. 주어진 임의의 시간에, 쿠버네티스 컨트롤 플레인은 오브젝트의 실제 상태를 여러분이 제시한 의도한 상태에 일치시키기 위해 능동적으로 관리한다. +거의 모든 쿠버네티스 오브젝트는 오브젝트의 구성을 결정해주는 +두 개의 중첩된 오브젝트 필드를 포함하는데 오브젝트 *`spec`* 과 오브젝트 *`status`* 이다. +`spec`을 가진 오브젝트는 오브젝트를 생성할 때 리소스에 +원하는 특징(_의도한 상태_)에 대한 설명을 +제공해서 설정한다. +`status`는 오브젝트의 _현재 상태_ 를 기술하고, 쿠버네티스 +컴포넌트에 의해 제공되고 업데이트 된다. 쿠버네티스 +{{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}}은 모든 오브젝트의 +실제 상태를 사용자가 의도한 상태와 일치시키기 위해 끊임없이 그리고 +능동적으로 관리한다. -예를 들어, 쿠버네티스 디플로이먼트는 클러스터에서 동작하는 애플리케이션을 표현해 줄 수 있는 오브젝트이다. 디플로이먼트를 생성할 때, 디플로이먼트 spec에 3개의 애플리케이션 레플리카가 동작되도록 설정할 수 있다. 쿠버네티스 시스템은 그 디플로이먼트 spec을 읽어 spec에 일치되도록 상태를 업데이트하여 3개의 의도한 애플리케이션 인스턴스를 구동시킨다. 만약, 그 인스턴스들 중 어느 하나가 (상태 변경에) 실패가 난다면, 쿠버네티스 시스템은 보정을 통해, 이 경우에는 인스턴스 대체를 착수하여, spec과 status 간의 차이에 대응한다. +예를 들어, 쿠버네티스 디플로이먼트는 클러스터에서 동작하는 애플리케이션을 +표현해줄 수 있는 오브젝트이다. 디플로이먼트를 생성할 때, 디플로이먼트 +spec에 3개의 애플리케이션 레플리카가 동작되도록 +설정할 수 있다. 쿠버네티스 시스템은 그 디플로이먼트 spec을 읽어 +spec에 일치되도록 상태를 업데이트하여 3개의 의도한 +애플리케이션 인스턴스를 구동시킨다. 만약, 그 인스턴스들 중 어느 하나가 +(상태 변경에) 실패한다면, 쿠버네티스 시스템은 보정(이 경우에는 대체 인스턴스를 시작하여)을 통해 +spec과 status 간의 차이에 대응한다. -오브젝트 spec, staus, 그리고 metadata에 대한 추가 정보는, [Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md) 를 참조한다. +오브젝트 명세, 상태, 그리고 메타데이터에 대한 추가 정보는, [Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md) 를 참조한다. ### 쿠버네티스 오브젝트 기술하기 -쿠버네티스에서 오브젝트를 생성할 때, (이름과 같은)오브젝트에 대한 기본적인 정보와 더불어, 의도한 상태를 기술한 오브젝트 spec을 제시해 줘야만 한다. 오브젝트를 생성하기 위해(직접이든 또는 `kubectl`을 통해서든) 쿠버네티스 API를 이용할 때, API 요청은 요청 내용 안에 JSON 형식으로 정보를 포함시켜 줘야만 한다. **가장 자주, .yaml 파일로 `kubectl`에 정보를 제공해준다.** `kubectl` 은 API 요청이 이루어질 때, JSON 형식으로 정보를 변환시켜 준다. +쿠버네티스에서 오브젝트를 생성할 때, (이름과 같은)오브젝트에 대한 기본적인 정보와 더불어, 의도한 상태를 기술한 오브젝트 spec을 제시해 줘야만 한다. 오브젝트를 생성하기 위해(직접이든 또는 `kubectl`을 통해서든) 쿠버네티스 API를 이용할 때, API 요청은 요청 내용 안에 JSON 형식으로 정보를 포함시켜 줘야만 한다. **대부분의 경우 정보를 .yaml 파일로 `kubectl`에 제공한다.** `kubectl`은 API 요청이 이루어질 때, JSON 형식으로 정보를 변환시켜 준다. 여기 쿠버네티스 디플로이먼트를 위한 요청 필드와 오브젝트 spec을 보여주는 `.yaml` 파일 예시가 있다. {{< codenew file="application/deployment.yaml" >}} 위 예시와 같이 .yaml 파일을 이용하여 디플로이먼트를 생성하기 위한 하나의 방식으로는 -`kubectl` 커맨드-라인 인터페이스에 인자값으로 `.yaml` 파일를 건네 +`kubectl` 커맨드-라인 인터페이스에 인자값으로 `.yaml` 파일을 건네 [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply) 커맨드를 이용하는 것이다. 다음 예시와 같다. ```shell @@ -61,7 +77,7 @@ deployment.apps/nginx-deployment created * `apiVersion` - 이 오브젝트를 생성하기 위해 사용하고 있는 쿠버네티스 API 버전이 어떤 것인지 * `kind` - 어떤 종류의 오브젝트를 생성하고자 하는지 -* `metadata` - `이름` 문자열, `UID`, 그리고 선택적인 `네임스페이스` 를 포함하여 오브젝트를 유일하게 구분지어 줄 데이터 +* `metadata` - `이름` 문자열, `UID`, 그리고 선택적인 `네임스페이스`를 포함하여 오브젝트를 유일하게 구분지어 줄 데이터 * `spec` - 오브젝트에 대해 어떤 상태를 의도하는지 오브젝트 `spec`에 대한 정확한 포맷은 모든 쿠버네티스 오브젝트마다 다르고, 그 오브젝트 특유의 중첩된 필드를 포함한다. [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) 는 쿠버네티스를 이용하여 생성할 수 있는 오브젝트에 대한 모든 spec 포맷을 살펴볼 수 있도록 해준다. 
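다음은 위의 네 가지 필수 필드가 실제 매니페스트에서 어떻게 조합되는지 보여주기 위한 최소한의 파드 예시이다. 이름과 이미지 태그는 설명을 위해 가정한 값이다.

```yaml
apiVersion: v1            # 이 오브젝트를 생성하기 위해 사용하는 쿠버네티스 API 버전
kind: Pod                 # 생성하고자 하는 오브젝트의 종류
metadata:
  name: nginx-minimal     # 오브젝트를 구분지어 줄 이름(가정한 값임)
spec:                     # 오브젝트에 대해 의도한 상태
  containers:
  - name: nginx
    image: nginx:1.14.2
```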
@@ -78,4 +94,3 @@ deployment.apps/nginx-deployment created * 쿠버네티스의 [컨트롤러](/ko/docs/concepts/architecture/controller/)에 대해 배운다. {{% /capture %}} - diff --git a/content/ko/docs/concepts/overview/working-with-objects/labels.md b/content/ko/docs/concepts/overview/working-with-objects/labels.md index ff9a93d5d7b41..a76972a8744ae 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/labels.md +++ b/content/ko/docs/concepts/overview/working-with-objects/labels.md @@ -27,11 +27,11 @@ _레이블_ 은 파드와 같은 오브젝트에 첨부된 키와 값의 쌍이 {{% capture body %}} -## 사용동기 +## 사용 동기 레이블을 이용하면 사용자가 느슨하게 결합한 방식으로 조직 구조와 시스템 오브젝트를 매핑할 수 있으며, 클라이언트에 매핑 정보를 저장할 필요가 없다. -서비스 배포와 배치 프로세싱 파이프라인은 흔히 다차원의 엔터티들이다(예: 다중파티션 또는 배포, 다중 릴리즈 트랙, 다중 계층, 계층속 여러 마이크로 서비스들). 관리에는 크로스-커팅 작업이 필요한 경우가 많은데 이 작업은 사용자보다는 인프라에 의해 결정된 엄격한 계층 표현인 캡슐화를 깨트린다. +서비스 배포와 배치 프로세싱 파이프라인은 흔히 다차원의 엔티티들이다(예: 다중 파티션 또는 배포, 다중 릴리즈 트랙, 다중 계층, 계층 속 여러 마이크로 서비스들). 관리에는 크로스-커팅 작업이 필요한 경우가 많은데 이 작업은 사용자보다는 인프라에 의해 결정된 엄격한 계층 표현인 캡슐화를 깨트린다. 레이블 예시: @@ -41,17 +41,17 @@ _레이블_ 은 파드와 같은 오브젝트에 첨부된 키와 값의 쌍이 * `"partition" : "customerA"`, `"partition" : "customerB"` * `"track" : "daily"`, `"track" : "weekly"` -레이블 예시는 일반적으로 사용하는 경우에 해당한다. 당신의 규약에 따라 자유롭게 개발할 수 있다. 오브젝트에 붙여진 레이블 키는 고유해야한다는 것을 기억해야한다. +레이블 예시는 일반적으로 사용하는 상황에 해당한다. 당신의 규약에 따라 자유롭게 개발할 수 있다. 오브젝트에 붙여진 레이블 키는 고유해야 한다는 것을 기억해야 한다. ## 구문과 캐릭터 셋 -_레이블_ 은 키와 값의 쌍이다. 유효한 레이블 키에는 슬래시(`/`)로 구분되는 선택한 접두사와 이름이라는 2개의 세그먼트가 있다. 이름 세그먼트는 63자 미만으로 시작과 끝은 알파벳과 숫자(`[a-z0-9A-Z]`)이며, 대시(`-`), 밑줄(`_`), 점(`.`)과 함께 사용할 수 있다. 접두사는 선택이다. 만약 접두사를 지정한 경우 접두사는 DNS의 하위 도메인으로 해야하며, 점(`.`)과, 전체 253자 이하, 슬래시(`/`)로 구분되는 DNS 레이블이다. +_레이블_ 은 키와 값의 쌍이다. 유효한 레이블 키에는 슬래시(`/`)로 구분되는 선택한 접두사와 이름이라는 2개의 세그먼트가 있다. 이름 세그먼트는 63자 미만으로 시작과 끝은 알파벳과 숫자(`[a-z0-9A-Z]`)이며, 대시(`-`), 밑줄(`_`), 점(`.`)과 함께 사용할 수 있다. 접두사는 선택이다. 만약 접두사를 지정한 경우 접두사는 DNS의 하위 도메인으로 해야 하며, 점(`.`)과 전체 253자 이하, 슬래시(`/`)로 구분되는 DNS 레이블이다. -접두사를 생략하면 키 레이블은 개인용으로 간주한다. 최종 사용자의 오브젝트에 자동화된 시스템 구성 요소(예: `kube-scheduler`, `kube-controller-manager`, `kube-apiserver`, `kubectl` 또는 다른 타사의 자동화 구성 요소)의 접두사를 지정해야 한다. +접두사를 생략하면 키 레이블은 개인용으로 간주한다. 최종 사용자의 오브젝트에 자동화된 시스템 컴포넌트(예: `kube-scheduler`, `kube-controller-manager`, `kube-apiserver`, `kubectl` 또는 다른 타사의 자동화 구성 요소)의 접두사를 지정해야 한다. -`kubernetes.io/`와 `k8s.io/` 접두사는 쿠버네티스의 핵심 구성요소로 예약되어있다. +`kubernetes.io/`와 `k8s.io/` 접두사는 쿠버네티스의 핵심 컴포넌트로 예약되어있다. -유효한 레이블 값은 63자 미만 또는 공백이며 시작과 끝은 알파벳과 숫자(`[a-z0-9A-Z]`)이며, 대시(`-`), 밑줄(`_`), 점(`.`)과 함께 사용할 수 있다. +유효한 레이블 값은 63자 미만 또는 공백이며 시작과 끝은 알파벳과 숫자(`[a-z0-9A-Z]`)이며, 대시(`-`), 밑줄(`_`), 점(`.`)과 함께 사용할 수 있다. 다음의 예시는 파드에 `environment: production` 과 `app: nginx` 2개의 레이블이 있는 구성 파일이다. @@ -83,10 +83,10 @@ API는 현재 _일치성 기준_ 과 _집합성 기준_ 이라는 두 종류의 레이블 셀렉터는 쉼표로 구분된 다양한 _요구사항_ 에 따라 만들 수 있다. 다양한 요구사항이 있는 경우 쉼표 기호가 AND(`&&`) 연산자로 구분되는 역할을 하도록 해야 한다. 비어있거나 지정되지 않은 셀렉터는 상황에 따라 달라진다. -셀렉터를 사용하는 API 유형은 유효성과 의미를 문서화 해야 한다. +셀렉터를 사용하는 API 유형은 유효성과 의미를 문서화해야 한다. {{< note >}} -레플리카 셋과 같은 일부 API 유형에서 두 인스턴스의 레이블 셀렉터는 네임스페이스 내에서 겹치지 않아야 한다. 그렇지 않으면 컨트롤러는 상충되는 명령으로 보고, 얼마나 많은 복제본이 필요한지 알 수 없다. +레플리카 셋과 같은 일부 API 유형에서 두 인스턴스의 레이블 셀렉터는 네임스페이스 내에서 겹치지 않아야 한다. 그렇지 않으면 컨트롤러는 상충하는 명령으로 보고, 얼마나 많은 복제본이 필요한지 알 수 없다. {{< /note >}} {{< caution >}} @@ -96,23 +96,21 @@ API는 현재 _일치성 기준_ 과 _집합성 기준_ 이라는 두 종류의 ### _일치성 기준_ 요건 -_일치성 기준_ 또는 _불일치 기준_ 의 요구사항으로 레이블의 키와 값의 필터링을 허용한다. 일치하는 오브젝트는 추가 레이블을 가질 수 있지만 레이블의 명시된 제약 조건을 모두 만족해야 한다. -`=`,`==`,`!=` 이 3가지 연산자만 허용한다. 처음 두 개의 연산자의 _일치성_(그리고 단순히 동의어일 뿐임), 나머지는 _불일치_를 의미한다. 예를 들면, +_일치성 기준_ 또는 _불일치 기준_ 의 요구사항으로 레이블의 키와 값의 필터링을 허용한다. 일치하는 오브젝트는 추가 레이블을 가질 수 있지만, 레이블의 명시된 제약 조건을 모두 만족해야 한다. 
+`=`,`==`,`!=` 이 3가지 연산자만 허용한다. 처음 두 개의 연산자의 _일치성_(그리고 단순히 동의어일 뿐임), 나머지는 _불일치_ 를 의미한다. 예를 들면, ``` environment = production tier != frontend ``` -전자는 `environment`를 키로 가지는 것과 `production`를 값으로 가지는 모든 리소스를 선택한다. +전자는 `environment`를 키로 가지는 것과 `production`을 값으로 가지는 모든 리소스를 선택한다. 후자는 `tier`를 키로 가지고, 값을 `frontend`를 가지는 리소스를 제외한 모든 리소스를 선택하고, `tier`를 키로 가지며, 값을 공백으로 가지는 모든 리소스를 선택한다. -`environment=production,tier!=frontend` 처럼 쉼표를 통해 한 문장으로 `frontend`를 제외한 `production`을 필터링할 수 있다. +`environment=production,tier!=frontend` 처럼 쉼표를 통해 한 문장으로 `frontend`를 제외한 `production`을 필터링할 수 있다. 균등-기반 레이블의 요건에 대한 하나의 이용 시나리오는 파드가 노드를 선택하는 기준을 지정하는 것이다. 예를 들어, 아래 샘플 파드는 "`accelerator=nvidia-tesla-p100`" 레이블을 가진 노드를 선택한다. - - ```yaml apiVersion: v1 kind: Pod @@ -141,11 +139,11 @@ partition ``` 첫 번째 예시에서 키가 `environment`이고 값이 `production` 또는 `qa`인 모든 리소스를 선택한다. -두 번째 예시에서 키가 `tier`이고 값이 `frontend`와 `backend`를 가지는 리소스를 제외한 모든 리소스와, 키로 `tier`를 가지고 값을 공백으로 가지는 모든 리소스를 선택한다. -세 번째 예시에서 레이블의 값에 상관없이 키가 `partition`를 포함하는 모든 리소스를 선택한다. -네 번째 예시에서 레이블의 값에 상관없이 키가 `partition`를 포함하지 않는 모든 리소스를 선택한다. +두 번째 예시에서 키가 `tier`이고 값이 `frontend`와 `backend`를 가지는 리소스를 제외한 모든 리소스와 키로 `tier`를 가지고 값을 공백으로 가지는 모든 리소스를 선택한다. +세 번째 예시에서 레이블의 값에 상관없이 키가 `partition`을 포함하는 모든 리소스를 선택한다. +네 번째 예시에서 레이블의 값에 상관없이 키가 `partition`을 포함하지 않는 모든 리소스를 선택한다. 마찬가지로 쉼표는 _AND_ 연산자로 작동한다. 따라서 `partition,environment notin (qa)`와 같이 사용하면 값과 상관없이 키가 `partition`인 것과 키가 `environment`이고 값이 `qa`와 다른 리소스를 필터링할 수 있다. -_집합성 기준_ 레이블 셀렉터는 일반적으로 `environment=production` 과 `environment in (production)`를 같은 것으로 본다. 유사하게는 `!=`과 `notin`을 같은 것으로 본다. +_집합성 기준_ 레이블 셀렉터는 일반적으로 `environment=production`과 `environment in (production)`을 같은 것으로 본다. 유사하게는 `!=`과 `notin`을 같은 것으로 본다. _집합성 기준_ 요건은 _일치성 기준_ 요건과 조합해서 사용할 수 있다. 예를 들어 `partition in (customerA, customerB),environment!=qa` @@ -158,7 +156,7 @@ LIST와 WATCH 작업은 쿼리 파라미터를 사용해서 반환되는 오브 * _불일치 기준_ 요건: `?labelSelector=environment%3Dproduction,tier%3Dfrontend` * _집합성 기준_ 요건: `?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29` -두 가지 레이블 셀렉터 스타일은 모두 REST 클라이언트를 통해 선택된 리소스를 확인하거나 목록을 볼 수 있다. 예를 들어, `kubectl`로 `API 서버`를 대상으로 _불일치 기준_으로 하는 셀렉터를 다음과 같이 이용할 수 있다. +두 가지 레이블 셀렉터 스타일은 모두 REST 클라이언트를 통해 선택된 리소스를 확인하거나 목록을 볼 수 있다. 예를 들어, `kubectl`로 `apiserver`를 대상으로 _불일치 기준_ 으로 하는 셀렉터를 다음과 같이 이용할 수 있다. ```shell kubectl get pods -l environment=production,tier=frontend @@ -184,11 +182,11 @@ kubectl get pods -l 'environment,environment notin (frontend)' ### API 오브젝트에서 참조 설정 -[`서비스`](/docs/user-guide/services) 와 [`레플리케이션 컨트롤러`](/ko/docs/concepts/workloads/controllers/replicationcontroller/)와 같은 일부 쿠버네티스 오브젝트는 레이블 셀렉터를 사용해서 [`파드`](/ko/docs/concepts/workloads/pods/pod/)와 같은 다른 리소스 집합을 선택한다. +[`services`](/ko/docs/concepts/services-networking/service/) 와 [`replicationcontrollers`](/ko/docs/concepts/workloads/controllers/replicationcontroller/)와 같은 일부 쿠버네티스 오브젝트는 레이블 셀렉터를 사용해서 [파드](/ko/docs/concepts/workloads/pods/pod/)와 같은 다른 리소스 집합을 선택한다. #### 서비스와 레플리케이션 컨트롤러 -`서비스`에서 지정하는 파드 집합은 레이블 셀렉터로 정의한다. 마찬가지로 `레플리케이션 컨트롤러`가 관리하는 파드의 개체군도 레이블 셀렉터로 정의한다. +`services`에서 지정하는 파드 집합은 레이블 셀렉터로 정의한다. 마찬가지로 `replicationcontrollers`가 관리하는 파드의 개체군도 레이블 셀렉터로 정의한다. 서비스와 레플리케이션 컨트롤러의 레이블 셀렉터는 `json` 또는 `yaml` 파일에 매핑된 _균등-기반_ 요구사항의 셀렉터만 지원한다. 
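예를 들어, 파드 집합을 선택하는 서비스의 _균등-기반_ 셀렉터는 다음과 같은 형태가 될 수 있다. 이름과 포트 값은 설명을 위해 가정한 것이다.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # 가정한 이름
spec:
  selector:
    app: MyApp            # app=MyApp 레이블을 가진 파드를 선택한다(균등-기반)
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
```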
@@ -209,7 +207,7 @@ selector: #### 세트-기반 요건을 지원하는 리소스 -[`잡`](/docs/concepts/workloads/controllers/jobs-run-to-completion/), [`디플로이먼트`](/ko/docs/concepts/workloads/controllers/deployment/), [`레플리카셋`](/ko/docs/concepts/workloads/controllers/replicaset/) 그리고 [`데몬셋`](/ko/docs/concepts/workloads/controllers/daemonset/) 같은 새로운 리소스들은 집합성 기준의 요건도 지원한다. +[`Job`](/ko/docs/concepts/workloads/controllers/jobs-run-to-completion/), [`Deployment`](/ko/docs/concepts/workloads/controllers/deployment/), [`ReplicaSet`](/ko/docs/concepts/workloads/controllers/replicaset/) 그리고 [`DaemonSet`](/ko/docs/concepts/workloads/controllers/daemonset/) 같은 새로운 리소스들은 집합성 기준의 요건도 지원한다. ```yaml selector: @@ -220,7 +218,7 @@ selector: - {key: environment, operator: NotIn, values: [dev]} ``` -`matchLabels`는 `{key,value}`의 쌍과 매칭된다. `matchLabels`에 매칭된 단일 `{key,value}`는 `matchExpressions`의 요소와 같으며 `key` 필드는 "key"로, `operator`는 "In" 그리고 `values`에는 "value"만 나열되어 있다. `matchExpressions`는 파드 셀렉터의 요건 목록이다. 유효한 연산자에는 In, NotIn, Exists 및 DoNotExist가 포함된다. In 및 NotIn은 설정된 값이 있어야 한다. `matchLabels`과 `matchExpressions` 모두 AND로 되어있어 일치하기 위해서는 모든 요건을 만족해야 한다. +`matchLabels`는 `{key,value}`의 쌍과 매칭된다. `matchLabels`에 매칭된 단일 `{key,value}`는 `matchExpressions`의 요소와 같으며 `key` 필드는 "key"로, `operator`는 "In" 그리고 `values`에는 "value"만 나열되어 있다. `matchExpressions`는 파드 셀렉터의 요건 목록이다. 유효한 연산자에는 In, NotIn, Exists 및 DoNotExist가 포함된다. In 및 NotIn은 설정된 값이 있어야 한다. `matchLabels`와 `matchExpressions` 모두 AND로 되어있어 일치하기 위해서는 모든 요건을 만족해야 한다. #### 노드 셋 선택 diff --git a/content/ko/docs/concepts/overview/working-with-objects/names.md b/content/ko/docs/concepts/overview/working-with-objects/names.md index 0ab2681a77022..069c49d9089cf 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/names.md +++ b/content/ko/docs/concepts/overview/working-with-objects/names.md @@ -9,9 +9,9 @@ weight: 20 클러스터의 각 오브젝트는 해당 유형의 리소스에 대하여 고유한 [_이름_](#names) 을 가지고 있다. 또한, 모든 쿠버네티스 오브젝트는 전체 클러스터에 걸쳐 고유한 [_UID_](#uids) 를 가지고 있다. -예를 들어, 이름이 `myapp-1234`인 파드는 동일한 [네임스페이스](/ko/docs/concepts/overview/working-with-objects/namespaces/) 내에서 하나만 가질 수 있지만, 이름이 `myapp-1234`인 파드와 디플로이먼트는 각각 가질 수 있다. +예를 들어, 이름이 `myapp-1234`인 파드는 동일한 [네임스페이스](/ko/docs/concepts/overview/working-with-objects/namespaces/) 내에서 하나만 존재할 수 있지만, 이름이 `myapp-1234`인 파드와 디플로이먼트는 각각 존재할 수 있다. -유일하지 않은 사용자 제공 속성에 대해서, 쿠버네티스는 [레이블](/ko/docs/concepts/overview/working-with-objects/labels/)과 [어노테이션](/ko/docs/concepts/overview/working-with-objects/annotations/)을 제공한다. +유일하지 않은 사용자 제공 속성의 경우 쿠버네티스는 [레이블](/ko/docs/concepts/overview/working-with-objects/labels/)과 [어노테이션](/ko/docs/concepts/overview/working-with-objects/annotations/)을 제공한다. {{% /capture %}} @@ -22,9 +22,9 @@ weight: 20 {{< glossary_definition term_id="name" length="all" >}} -다음은 리소스에 일반적으로 사용되는 세가지 유형의 이름 제한 조건이다. +다음은 리소스에 일반적으로 사용되는 세 가지 유형의 이름 제한 조건이다. -### DNS 서브도메인 이름들 +### DNS 서브도메인 이름 대부분의 리소스 유형에는 [RFC 1123](https://tools.ietf.org/html/rfc1123)에 정의된 대로 DNS 서브도메인 이름으로 사용할 수 있는 이름이 필요하다. @@ -52,7 +52,7 @@ DNS 서브도메인 이름으로 사용할 수 있는 이름이 필요하다. 있어야 한다. 즉 이름이 "." 또는 ".."이 아닐 수 있으며 이름에는 "/" 또는 "%"가 포함될 수 없다. -여기 파드의 이름이 `nginx-demo`라는 매니페스트 예시가 있다. +아래는 파드의 이름이 `nginx-demo`라는 매니페스트 예시이다. 
```yaml apiVersion: v1 diff --git a/content/ko/docs/concepts/overview/working-with-objects/namespaces.md b/content/ko/docs/concepts/overview/working-with-objects/namespaces.md index b6108dfc59c06..8308846611be8 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/namespaces.md +++ b/content/ko/docs/concepts/overview/working-with-objects/namespaces.md @@ -6,33 +6,33 @@ weight: 30 {{% capture overview %}} -쿠버네티스는 동일 물리 클러스터를 기반으로 하는 복수의 가상 클러스터를 지원한다. -이들 가상 클러스터를 네임스페이스라고 한다. +쿠버네티스는 동일한 물리 클러스터를 기반으로 하는 여러 가상 클러스터를 지원한다. +이런 가상 클러스터를 네임스페이스라고 한다. {{% /capture %}} {{% capture body %}} -## 복수의 네임스페이스를 사용하는 경우 +## 여러 개의 네임스페이스를 사용하는 경우 -네임스페이스는 복수의 팀이나, 프로젝트에 걸쳐서 많은 사용자가 있는 환경에서 사용하도록 -만들어졌다. 사용자가 거의 없거나, 수 십명 정도가 되는 경우에는, -네임스페이스를 고려할 필요가 전혀 없다. +네임스페이스는 여러 개의 팀이나, 프로젝트에 걸쳐서 많은 사용자가 있는 환경에서 사용하도록 +만들어졌다. 사용자가 거의 없거나, 수 십명 정도가 되는 경우에는 +네임스페이스를 전혀 고려할 필요가 없다. 네임스페이스가 제공하는 기능이 필요할 때 사용하도록 하자. -네임스페이스는 이름의 범위를 제공한다. -리소스의 이름은 네임스페이스 내에서 유일해야하지만, -네임스페이스를 통틀어서 유일할 필요는 없다. +네임스페이스는 이름의 범위를 제공한다. 리소스의 이름은 네임스페이스 내에서 유일해야하지만, +네임스페이스를 통틀어서 유일할 필요는 없다. 네임스페이스는 서로 중첩될 수 없으며, +각 쿠버네티스 리소스는 하나의 네임스페이스에만 있을 수 있다. -네임스페이스는 클러스터 자원을 ([리소스 쿼터](/docs/concepts/policy/resource-quotas/)를 통해) 복수의 사용자 사이에서 나누는 방법이다. +네임스페이스는 클러스터 자원을 ([리소스 쿼터](/docs/concepts/policy/resource-quotas/)를 통해) 여러 사용자 사이에서 나누는 방법이다. -다음 버전의 쿠버네티스에서는, 같은 네임스페이스의 오브젝트는 기본적으로 동일한 접근 제어 정책을 갖게 된다. -네임스페이스는 서로 중첩될 수 없으며, 각 쿠버네티스 리소스는 하나의 네임스페이스에만 있을 수 있다. +이후 버전의 쿠버네티스에서는 같은 네임스페이스의 오브젝트는 기본적으로 +동일한 접근 제어 정책을 갖게 된다. -같은 소프트웨어의 다른 버전과 같이 단지 약간의 차이가 있는 리소스를 분리하기 위해서 -복수의 네임스페이스를 사용할 필요가 있다. 동일한 네임스페이스에 있는 리소스를 -구분하기 위해서는 [레이블](/ko/docs/concepts/overview/working-with-objects/labels/)을 사용한다. +동일한 소프트웨어의 다른 버전과 같이 약간 다른 리소스를 분리하기 위해 +여러 네임스페이스를 사용할 필요는 없다. 동일한 네임스페이스 내에서 리소스를 +구별하기 위해 [레이블](/ko/docs/concepts/overview/working-with-objects/labels/)을 사용한다. ## 네임스페이스 다루기 @@ -41,7 +41,7 @@ weight: 30 ### 네임스페이스 조회 -사용중인 클러스터의 현재 네임스페이스를 나열할 수 있다. +사용 중인 클러스터의 현재 네임스페이스를 나열할 수 있다. ```shell kubectl get namespace @@ -61,7 +61,7 @@ kube-public Active 1d ### 요청에 네임스페이스 설정하기 -네임스페이스를 현재 요청에 설정하기 위해서는, `--namespace` 플래그를 사용한다. +현재 요청에 대한 네임스페이스를 설정하기 위해서 `--namespace` 플래그를 사용한다. 예를 들면, @@ -72,7 +72,7 @@ kubectl get pods --namespace= ### 선호하는 네임스페이스 설정하기 -이후 모든 kubectl 명령에서 사용될 네임스페이스를 컨텍스트에 +이후 모든 kubectl 명령에서 사용하는 네임스페이스를 컨텍스트에 영구적으로 저장할 수 있다. ```shell @@ -83,7 +83,7 @@ kubectl config view --minify | grep namespace: ## 네임스페이스와 DNS -[서비스](/docs/user-guide/services)를 생성하면, 대응되는 +[서비스](/docs/user-guide/services)를 생성하면 해당 [DNS 엔트리](/ko/docs/concepts/services-networking/dns-pod-service/)가 생성된다. 이 엔트리는 `<서비스-이름>.<네임스페이스-이름>.svc.cluster.local`의 형식을 갖는데, 이는 컨테이너가 `<서비스-이름>`만 사용하는 경우, 네임스페이스 내에 국한된 서비스로 연결된다. @@ -94,10 +94,10 @@ kubectl config view --minify | grep namespace: 대부분의 쿠버네티스 리소스(예를 들어, 파드, 서비스, 레플리케이션 컨트롤러 외)는 네임스페이스에 속한다. 하지만 네임스페이스 리소스 자체는 네임스페이스에 속하지 않는다. -그리고 [nodes](/ko/docs/concepts/architecture/nodes/)나 퍼시스턴트 볼륨과 같은 저수준 리소스는 어느 +그리고 [노드](/ko/docs/concepts/architecture/nodes/)나 퍼시스턴트 볼륨과 같은 저수준 리소스는 어느 네임스페이스에도 속하지 않는다. -네임스페이스에 속하지 않는 쿠버네티스 리소스를 조회하기 위해서는, +다음은 네임스페이스에 속하지 않는 쿠버네티스 리소스를 조회하는 방법이다. 
```shell # 네임스페이스에 속하는 리소스 diff --git a/content/ko/docs/concepts/overview/working-with-objects/object-management.md b/content/ko/docs/concepts/overview/working-with-objects/object-management.md index 43ab232e4681f..e164852d2e00d 100644 --- a/content/ko/docs/concepts/overview/working-with-objects/object-management.md +++ b/content/ko/docs/concepts/overview/working-with-objects/object-management.md @@ -7,7 +7,7 @@ weight: 15 {{% capture overview %}} `kubectl` 커맨드라인 툴은 쿠버네티스 오브젝트를 생성하고 관리하기 위한 몇 가지 상이한 방법을 지원한다. 이 문서는 여러가지 접근법에 대한 개요을 -제공한다. Kubectl으로 오브젝트 관리하기에 대한 자세한 설명은 +제공한다. Kubectl로 오브젝트 관리하기에 대한 자세한 설명은 [Kubectl 서적](https://kubectl.docs.kubernetes.io)에서 확인한다. {{% /capture %}} @@ -16,7 +16,7 @@ weight: 15 ## 관리 기법 {{< warning >}} -쿠버네티스 오브젝트는 오직 하나의 기법을 사용하여 관리되어야 한다. 동일한 오브젝트에 +쿠버네티스 오브젝트는 하나의 기법만 사용하여 관리해야 한다. 동일한 오브젝트에 대해 혼합하고 일치시키는 기법은 확실하지 않은 동작을 초래하게 된다. {{< /warning >}} @@ -26,10 +26,10 @@ weight: 15 | Imperative object configuration | Individual files | Production projects | 1 | Moderate | | Declarative object configuration | Directories of files | Production projects | 1+ | Highest | -## 명령형 명령어 +## 명령형 커맨드 -명령형 명령어를 사용할 경우, 사용자는 클러스터 내 활성 오브젝트를 대상으로 -직접 동작시킨다. 사용자는 `kubectl` 명령어에 인수 또는 플래그로 작업을 +명령형 커맨드를 사용할 경우, 사용자는 클러스터 내 활성 오브젝트를 대상으로 +직접 동작시킨다. 사용자는 `kubectl` 커맨드에 인수 또는 플래그로 작업을 제공한다. 이것은 클러스터에서 일회성 작업을 개시시키거나 동작시키기 위한 @@ -52,21 +52,21 @@ kubectl create deployment nginx --image nginx ### 트레이드 오프 -오브젝트 구성과 비교한 장점은 +오브젝트 구성에 비해 장점은 다음과 같다. -- 명령어가 익히기에 단순, 용이하고 기억하기 쉽다. -- 명령어가 클러스터에 변경을 주기 위해 오직 단일 과정만이 필요하다. +- 커맨드는 간단해서 배우기 쉽고, 기억하기 쉽다. +- 커맨드는 클러스터를 수정하기 위해 단 하나의 단계만을 필요로 한다. -오브젝트 구성과 비교한 단점은 +오브젝트 구성에 비해 단점은 다음과 같다. -- 명령어가 변경 검토 프로세스와 통합되지 않는다. -- 명령어가 변경에 관한 감사 추적을 제공하지 않는다. -- 명렁어가 활성 동작 중인 경우를 제외하고는 레코드의 소스를 제공하지 않는다. -- 명령어가 새로운 오브젝트 생성을 위한 템플릿을 제공하지 않는다. +- 커맨드는 변경 검토 프로세스와 통합되지 않는다. +- 커맨드는 변경에 관한 감사 추적(audit trail)을 제공하지 않는다. +- 커맨드는 활성 동작 중인 경우를 제외하고는 레코드의 소스를 제공하지 않는다. +- 커맨드는 새로운 오브젝트 생성을 위한 템플릿을 제공하지 않는다. ## 명령형 오브젝트 구성 -명령형 오브젝트 구성에서, kubectl 명령은 작업 (생성, 대체 등), +명령형 오브젝트 구성에서 kubectl 커맨드는 작업(생성, 교체 등), 선택적 플래그, 그리고 최소 하나의 파일 이름을 정의한다. 그 파일은 YAML 또는 JSON 형식으로 오브젝트의 완전한 정의를 포함해야만 한다. @@ -75,12 +75,12 @@ kubectl create deployment nginx --image nginx 참고한다. {{< warning >}} -명령형 `replace` 명령은 기존 spec을 새롭게 제공된 것으로 대체하며, -구성 파일에서 누락된 오브젝트에 대한 모든 변경사항은 없어진다. -이러한 접근은 구성 파일에 대해 독립적으로 spec이 업데이트되는 -형태의 리소스와 함께 사용하면 안된다. -예를 들어, `LoadBalancer` 형태의 서비스는 `externalIPs` 필드를 -클러스터 구성과는 독립적으로 업데이트한다. +명령형 `replace` 커맨드는 기존 spec을 새로 제공된 spec으로 바꾸고 +구성 파일에서 누락된 오브젝트의 모든 변경 사항을 삭제한다. +이 방법은 spec이 구성 파일과는 별개로 업데이트되는 리소스 유형에는 +사용하지 말아야한다. +예를 들어 `LoadBalancer` 유형의 서비스는 클러스터의 구성과 별도로 +`externalIPs` 필드가 업데이트된다. {{< /warning >}} ### 예시 @@ -97,7 +97,7 @@ kubectl create -f nginx.yaml kubectl delete -f nginx.yaml -f redis.yaml ``` -활성 동작하는 구성을 덮어씀으로서 구성 파일에 정의된 오브젝트를 +활성 동작하는 구성을 덮어씀으로써 구성 파일에 정의된 오브젝트를 업데이트한다. ```sh @@ -106,41 +106,42 @@ kubectl replace -f nginx.yaml ### 트레이드 오프 -명령형 명령과 비교한 장점은 +명령형 커맨드에 비해 장점은 다음과 같다. -- 오브젝트 구성이 Git과 같은 소스 컨트롤 시스템에 보관되어 질 수 있다. +- 오브젝트 구성은 Git과 같은 소스 컨트롤 시스템에 보관할 수 있다. - 오브젝트 구성은 푸시와 감사 추적 전에 변경사항을 검토하는 것과 같은 프로세스들과 통합할 수 있다. -- 오브젝트 구성이 새로운 오브젝트 생성을 위한 템플릿을 제공한다. +- 오브젝트 구성은 새로운 오브젝트 생성을 위한 템플릿을 제공한다. -명령형 명령과 비교한 단점은 +명령형 커맨드에 비해 단점은 다음과 같다. -- 오브젝트 구성이 오브젝트 스키마에 대한 기본적인 이해를 필요로 한다. -- 오브젝트 구성이 YAML 파일을 기록하는 추가적인 과정을 필요로 한다. +- 오브젝트 구성은 오브젝트 스키마에 대한 기본적인 이해를 필요로 한다. +- 오브젝트 구성은 YAML 파일을 기록하는 추가적인 과정을 필요로 한다. -선언형 오브젝트 구성과 비교한 장점은 +선언형 오브젝트 구성에 비해 장점은 다음과 같다. -- 명령형 오브젝트 구성의 작용은 보다 간결하고 이해하기에 용이하다. -- 쿠버네티스 버전 1.5 부터, 명령형 오브젝트 구성이 더욱 발달한다. 
+- 명령형 오브젝트 구성의 동작은 보다 간결하고 이해하기 쉽다. +- 쿠버네티스 버전 1.5 부터는 더 성숙한 명령형 오브젝트 구성을 제공한다. -선언형 오브젝트 구성과 비교한 단점은 +선언형 오브젝트 구성에 비해 단점은 다음과 같다. - 명령형 오브젝트 구성은 디렉토리가 아닌, 파일에 대해 가장 효과가 있다. -- 활성 오브젝트에 대한 업데이트는 구성 파일 내 반영되어야만 한다. 그렇지 않으면 다음 대체가 이루어지는 동안 유실 될 것이다. +- 활성 오브젝트에 대한 업데이트는 구성 파일에 반영되어야 한다. 그렇지 않으면 다음 교체 중에 손실된다. + ## 선언형 오브젝트 구성 선언형 오브젝트 구성을 사용할 경우, 사용자는 로컬에 보관된 오브젝트 구성 파일을 대상으로 작동시키지만, 사용자는 파일에서 수행 할 작업을 정의하지 않는다. 생성, 업데이트, 그리고 삭제 작업은 -`kubectl`에 의해 오브젝트 마다 자동으로 감지된다. 이것은 다른 오브젝트를 위해 필요할 수도 있는 -다른 작업에서, 디렉토리들을 대상으로 동작할 수 있도록 해준다. +`kubectl`에 의해 오브젝트 마다 자동으로 감지된다. 이를 통해 다른 오브젝트에 대해 +다른 조작이 필요할 수 있는 디렉토리에서 작업할 수 있다. {{< note >}} -선언형 오브젝트 구성은 비록 그 변경사항이 오브젝트 구성 파일로 -되돌려 병합될 수 없기는 하지만, 다른 작성자에 의해 이루어진 변경사항을 유지한다. -이는 전체 오브젝트 구성을 대체하기 위해 `replace` API 작업을 이용하는 대신, -오직 인지된 차이점을 기록하기 위한 `patch` -API 작업을 이용함으로서 가능하다. +선언형 오브젝트 구성은 변경 사항이 오브젝트 구성 파일에 +다시 병합되지 않더라도 다른 작성자가 작성한 변경 사항을 유지한다. +이것은 전체 오브젝트 구성 변경을 위한 `replace` API를 +사용하는 대신, `patch` API를 사용하여 인지되는 차이만 +작성하기 때문에 가능하다. {{< /note >}} ### 예시 @@ -163,24 +164,24 @@ kubectl apply -R -f configs/ ### 트레이드 오프 -명령형 오브젝트 구성과 비교한 장점은 +명령형 오브젝트 구성에 비해 장점은 다음과 같다. -- 구성 파일로 되돌려 병합될 수 없기는 하지만, 활성 오브젝트에 직접 이루어진 변경사항이 유지된다. -- 선언형 오브젝트 구성은 디렉토리에 관한 동작에 대해 더 나은 지원을 하고 자동으로 오브젝트 마다의 작업(생성, 패치, 삭제)을 감지한다. +- 활성 오브젝트에 직접 작성된 변경 사항은 구성 파일로 다시 병합되지 않더라도 유지된다. +- 선언형 오브젝트 구성은 디렉토리에서의 작업 및 오브젝트 별 작업 유형(생성, 패치, 삭제)의 자동 감지에 더 나은 지원을 제공한다. -명령형 오브젝트 구성과 비교한 단점은 +명령형 오브젝트 구성에 비해 단점은 다음과 같다. -- 선언형 오브젝트 구성은 예측이 불가할 경우 디버그 하기가 더 어렵고 결과를 이해하기가 더 어렵다. -- 차이점를 이용한 부분적 업데이트는 복잡한 병합과 패치 작업을 만들어 낸다. +- 선언형 오브젝트 구성은 예상치 못한 결과를 디버깅하고 이해하기가 더 어렵다. +- diff를 사용한 부분 업데이트는 복잡한 병합 및 패치 작업을 일으킨다. {{% /capture %}} {{% capture whatsnext %}} - [명령형 커맨드를 이용한 쿠버네티스 오브젝트 관리하기](/ko/docs/tasks/manage-kubernetes-objects/imperative-command/) -- [오브젝트 구성을 이용한 쿠버네티스 오브젝트 관리하기 (명령형)](/ko/docs/tasks/manage-kubernetes-objects/imperative-config/) -- [오브젝트 구성을 이용한 쿠버네티스 오브젝트 관리하기 (선언형)](/ko/docs/tasks/manage-kubernetes-objects/declarative-config/) -- [Kustomize를 사용한 쿠버네티스 오브젝트 관리하기 (선언형)](/docs/tasks/manage-kubernetes-objects/kustomization/) -- [Kubectl 명령어 참조](/docs/reference/generated/kubectl/kubectl-commands/) +- [오브젝트 구성을 이용한 쿠버네티스 오브젝트 관리하기(명령형)](/ko/docs/tasks/manage-kubernetes-objects/imperative-config/) +- [오브젝트 구성을 이용한 쿠버네티스 오브젝트 관리하기(선언형)](/ko/docs/tasks/manage-kubernetes-objects/declarative-config/) +- [Kustomize를 사용한 쿠버네티스 오브젝트 관리하기(선언형)](/ko/docs/tasks/manage-kubernetes-objects/kustomization/) +- [Kubectl 커맨드 참조](/docs/reference/generated/kubectl/kubectl-commands/) - [Kubectl 서적](https://kubectl.docs.kubernetes.io) - [쿠버네티스 API 참조](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) diff --git a/content/ko/docs/concepts/security/overview.md b/content/ko/docs/concepts/security/overview.md index a8f5f050a4b20..fda1f15984c9b 100644 --- a/content/ko/docs/concepts/security/overview.md +++ b/content/ko/docs/concepts/security/overview.md @@ -152,7 +152,7 @@ TLS를 통한 접근 | 코드가 TCP를 통해 통신해야 한다면, 클라이 {{% /capture %}} {{% capture whatsnext %}} -* [파드에 대한 네트워크 정책](/docs/concepts/services-networking/network-policies/) 알아보기 +* [파드에 대한 네트워크 정책](/ko/docs/concepts/services-networking/network-policies/) 알아보기 * [클러스터 보안](/docs/tasks/administer-cluster/securing-a-cluster/)에 대해 알아보기 * [API 접근 통제](/docs/reference/access-authn-authz/controlling-access/)에 대해 알아보기 * 컨트롤 플레인에 대한 [전송 데이터 암호화](/docs/tasks/tls/managing-tls-in-a-cluster/) 알아보기 diff --git a/content/ko/docs/concepts/services-networking/connect-applications-service.md 
b/content/ko/docs/concepts/services-networking/connect-applications-service.md index 2b79d11e882c7..ca2440a0484d0 100644 --- a/content/ko/docs/concepts/services-networking/connect-applications-service.md +++ b/content/ko/docs/concepts/services-networking/connect-applications-service.md @@ -418,10 +418,8 @@ LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.el {{% capture whatsnext %}} -게다가 쿠버네티스는 여러 클러스터와 클라우드 공급자들을 포괄하는 -페더레이션 서비스를 지원하여 가용성을 높이고 내결함성을 향상시키며, -보다 큰 서비스의 확장성을 제공한다. 자세한 내용은 -[페더레이션 서비스 사용자 가이드](/docs/concepts/cluster-administration/federation-service-discovery/) -를 본다. +* [서비스를 사용해서 클러스터 내 애플리케이션에 접근하기](/docs/tasks/access-application-cluster/service-access-application-cluster/)를 더 자세히 알아본다. +* [서비스를 사용해서 프론트 엔드부터 백 엔드까지 연결하기](/docs/tasks/access-application-cluster/connecting-frontend-backend/)를 더 자세히 알아본다. +* [외부 로드 밸런서를 생성하기](/docs/tasks/access-application-cluster/create-external-load-balancer/)를 더 자세히 알아본다. {{% /capture %}} diff --git a/content/ko/docs/concepts/services-networking/endpoint-slices.md b/content/ko/docs/concepts/services-networking/endpoint-slices.md index f40ff87993ff9..caf6bb1cd464d 100644 --- a/content/ko/docs/concepts/services-networking/endpoint-slices.md +++ b/content/ko/docs/concepts/services-networking/endpoint-slices.md @@ -30,6 +30,8 @@ term_id="selector" >}} 가 지정되면 EndpointSlice 컨트롤러는 자동으로 엔드포인트슬라이스를 생성한다. 이 엔드포인트슬라이스는 서비스 셀렉터와 매치되는 모든 파드들을 포함하고 참조한다. 엔드포인트슬라이스는 고유한 서비스와 포트 조합을 통해 네트워크 엔드포인트를 그룹화 한다. +EndpointSlice 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. 예를 들어, 여기에 `example` 쿠버네티스 서비스를 위한 EndpointSlice 리소스 샘플이 있다. diff --git a/content/ko/docs/concepts/services-networking/ingress.md b/content/ko/docs/concepts/services-networking/ingress.md index 64d610a49944b..03d8700cd2fa3 100644 --- a/content/ko/docs/concepts/services-networking/ingress.md +++ b/content/ko/docs/concepts/services-networking/ingress.md @@ -76,11 +76,13 @@ spec: servicePort: 80 ``` - 다른 모든 쿠버네티스 리소스와 마찬가지로 인그레스에는 `apiVersion`, `kind`, 그리고 `metadata` 필드가 필요하다. - 설정 파일의 작성에 대한 일반적인 내용은 [애플리케이션 배포하기](/docs/tasks/run-application/run-stateless-application-deployment/), [컨테이너 구성하기](/docs/tasks/configure-pod-container/configure-pod-configmap/), [리소스 관리하기](/docs/concepts/cluster-administration/manage-deployment/)를 참조한다. +다른 모든 쿠버네티스 리소스와 마찬가지로 인그레스에는 `apiVersion`, `kind`, 그리고 `metadata` 필드가 필요하다. +인그레스 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. +설정 파일의 작성에 대한 일반적인 내용은 [애플리케이션 배포하기](/docs/tasks/run-application/run-stateless-application-deployment/), [컨테이너 구성하기](/docs/tasks/configure-pod-container/configure-pod-configmap/), [리소스 관리하기](/docs/concepts/cluster-administration/manage-deployment/)를 참조한다. 인그레스는 종종 어노테이션을 이용해서 인그레스 컨트롤러에 따라 몇 가지 옵션을 구성하는데, 그 예시는 [재작성-타겟 어노테이션](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md)이다. - 다른 [인그레스 컨트롤러](/ko/docs/concepts/services-networking/ingress-controllers)는 다른 어노테이션을 지원한다. +다른 [인그레스 컨트롤러](/ko/docs/concepts/services-networking/ingress-controllers)는 다른 어노테이션을 지원한다. 지원되는 어노테이션을 확인하려면 선택한 인그레스 컨트롤러의 설명서를 검토한다. 
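예를 들어, NGINX 인그레스 컨트롤러를 사용하는 경우 재작성-타겟 어노테이션은 다음과 같은 형태로 지정해 볼 수 있다. 이름과 경로, 서비스 값은 설명을 위해 가정한 것이다.

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: rewrite-example                            # 가정한 이름
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /  # NGINX 인그레스 컨트롤러 전용 어노테이션
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test                        # 가정한 서비스 이름
          servicePort: 80
```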
인그레스 [사양](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) @@ -132,10 +134,10 @@ kubectl get ingress test-ingress ``` NAME HOSTS ADDRESS PORTS AGE -test-ingress * 107.178.254.228 80 59s +test-ingress * 203.0.113.123 80 59s ``` -여기서 `107.178.254.228` 는 인그레스 컨트롤러가 인그레스를 충족시키기 위해 +여기서 `203.0.113.123` 는 인그레스 컨트롤러가 인그레스를 충족시키기 위해 할당한 IP 이다. {{< note >}} diff --git a/content/ko/docs/concepts/services-networking/service.md b/content/ko/docs/concepts/services-networking/service.md index 6d47a2c2aa6b0..3072c277a0aeb 100644 --- a/content/ko/docs/concepts/services-networking/service.md +++ b/content/ko/docs/concepts/services-networking/service.md @@ -73,6 +73,8 @@ _서비스_ 로 들어가보자. 쿠버네티스의 서비스는 파드와 비슷한 REST 오브젝트이다. 모든 REST 오브젝트와 마찬가지로, 서비스 정의를 API 서버에 `POST`하여 새 인스턴스를 생성할 수 있다. +서비스 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. 예를 들어, 각각 TCP 포트 9376에서 수신하고 `app=MyApp` 레이블을 가지고 있는 파드 세트가 있다고 가정해 보자. @@ -167,6 +169,9 @@ subsets: - port: 9376 ``` +엔드포인트 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. + {{< note >}} 엔드포인트 IP는 루프백(loopback) (IPv4의 경우 127.0.0.0/8, IPv6의 경우 ::1/128), 또는 링크-로컬 (IPv4의 경우 169.254.0.0/16와 224.0.0.0/24, IPv6의 경우 fe80::/64)이 _되어서는 안된다_. @@ -1172,19 +1177,6 @@ SCTP는 Windows 기반 노드를 지원하지 않는다. kube-proxy는 유저스페이스 모드에 있을 때 SCTP 연결 관리를 지원하지 않는다. {{< /warning >}} -## 향후 작업 - -향후, 서비스에 대한 프록시 정책은 예를 들어, 마스터-선택 또는 샤드 같은 -단순한 라운드-로빈 밸런싱보다 미묘한 차이가 생길 수 있다. 또한 -일부 서비스에는 "실제" 로드 밸런서가 있을 것으로 예상되는데, 이 경우 -가상 IP 주소는 단순히 패킷을 그곳으로 전송한다. - -쿠버네티스 프로젝트는 L7 (HTTP) 서비스에 대한 지원을 개선하려고 한다. - -쿠버네티스 프로젝트는 현재 ClusterIP, NodePort 및 LoadBalancer 모드 등을 포함하는 서비스에 대해 -보다 유연한 인그레스 모드를 지원하려고 한다. - - {{% /capture %}} {{% capture whatsnext %}} diff --git a/content/ko/docs/concepts/storage/volume-pvc-datasource.md b/content/ko/docs/concepts/storage/volume-pvc-datasource.md index ab9f1db2ca4c5..92ea37f8cc2ee 100644 --- a/content/ko/docs/concepts/storage/volume-pvc-datasource.md +++ b/content/ko/docs/concepts/storage/volume-pvc-datasource.md @@ -56,6 +56,10 @@ spec: name: pvc-1 ``` +{{< note >}} +`spec.resources.requests.storage` 에 용량 값을 지정해야 하며, 지정한 값은 소스 볼륨의 용량과 같거나 또는 더 커야 한다. +{{< /note >}} + 그 결과로 지정된 소스 `pvc-1` 과 동일한 내용을 가진 `clone-of-pvc-1` 이라는 이름을 가지는 새로운 PVC가 생겨난다. ## 사용 diff --git a/content/ko/docs/concepts/storage/volumes.md b/content/ko/docs/concepts/storage/volumes.md index fa84c193c9865..95395f7587250 100644 --- a/content/ko/docs/concepts/storage/volumes.md +++ b/content/ko/docs/concepts/storage/volumes.md @@ -599,6 +599,38 @@ spec: type: Directory ``` +{{< caution >}} +`FileOrCreate` 모드에서는 파일의 상위 디렉터리가 생성되지 않는다. 마운트된 파일의 상위 디렉터리가 없으면 파드가 시작되지 않는다. 이 모드가 작동하도록 하기 위해 다음과 같이 디렉터리와 파일을 별도로 마운트할 수 있다. +{{< /caution >}} + +#### FileOrCreate 파드 예시 + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: test-webserver +spec: + containers: + - name: test-webserver + image: k8s.gcr.io/test-webserver:latest + volumeMounts: + - mountPath: /var/local/aaa + name: mydir + - mountPath: /var/local/aaa/1.txt + name: myfile + volumes: + - name: mydir + hostPath: + # 파일 디렉터리가 생성되었는지 확인한다. 
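+      # DirectoryOrCreate 타입은 경로가 없으면 빈 디렉터리를 생성한다.
+      # 디렉터리를 이렇게 별도로 마운트하므로 아래 FileOrCreate 파일의 상위 디렉터리가 준비된다.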
+ path: /var/local/aaa + type: DirectoryOrCreate + - name: myfile + hostPath: + path: /var/local/aaa/1.txt + type: FileOrCreate +``` + ### iscsi {#iscsi} `iscsi` 볼륨을 사용하면 기존 iSCSI (SCSI over IP) 볼륨을 파드에 마운트 @@ -1440,5 +1472,5 @@ sudo systemctl restart docker {{% capture whatsnext %}} -* [퍼시스턴트 볼륨과 함께 워드프레스와 MySQL 배포하기](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)의 예시를 따른다. +* [퍼시스턴트 볼륨과 함께 워드프레스와 MySQL 배포하기](/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)의 예시를 따른다. {{% /capture %}} diff --git a/content/ko/docs/concepts/workloads/controllers/cron-jobs.md b/content/ko/docs/concepts/workloads/controllers/cron-jobs.md index 4e912069ec921..4bbcf22ebb11d 100644 --- a/content/ko/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/ko/docs/concepts/workloads/controllers/cron-jobs.md @@ -8,7 +8,7 @@ weight: 80 {{< feature-state for_k8s_version="v1.8" state="beta" >}} -_크론 잡은_ 시간 기반의 일정에 따라 [잡](/docs/concepts/workloads/controllers/jobs-run-to-completion/)을 만든다. +_크론 잡은_ 시간 기반의 일정에 따라 [잡](/ko/docs/concepts/workloads/controllers/jobs-run-to-completion/)을 만든다. 하나의 크론잡 객체는 _크론탭_ (크론 테이블) 파일의 한 줄과 같다. 크론잡은 잡을 [크론](https://en.wikipedia.org/wiki/Cron)형식으로 쓰여진 주어진 일정에 따라 주기적으로 동작시킨다. @@ -23,7 +23,8 @@ _크론 잡은_ 시간 기반의 일정에 따라 [잡](/docs/concepts/workloads {{< /caution >}} 크론잡 리소스에 대한 매니페스트를 생성할때에는 제공하는 이름이 -52자 이하인지 확인해야 한다. 이는 크론잡 컨트롤러는 제공된 잡 이름에 +유효한 [DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. +이름은 52자 이하여야 한다. 이는 크론잡 컨트롤러는 제공된 잡 이름에 11자를 자동으로 추가하고, 작업 이름의 최대 길이는 63자라는 제약 조건이 있기 때문이다. diff --git a/content/ko/docs/concepts/workloads/controllers/daemonset.md b/content/ko/docs/concepts/workloads/controllers/daemonset.md index 9309a7520faeb..2a19576a0fd9e 100644 --- a/content/ko/docs/concepts/workloads/controllers/daemonset.md +++ b/content/ko/docs/concepts/workloads/controllers/daemonset.md @@ -33,7 +33,8 @@ YAML 파일로 데몬셋을 설명 할 수 있다. 예를 들어 아래 `daemons {{< codenew file="controllers/daemonset.yaml" >}} -* YAML 파일을 기반으로 데몬셋을 생성한다. +YAML 파일을 기반으로 데몬셋을 생성한다. + ``` kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml ``` @@ -44,6 +45,8 @@ kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml 일반적인 설정파일 작업에 대한 정보는 [애플리케이션 배포하기](/docs/tasks/run-application/run-stateless-application-deployment/), [컨테이너 구성하기](/ko/docs/tasks/) 그리고 [kubectl을 사용한 오브젝트 관리](/ko/docs/concepts/overview/working-with-objects/object-management/) 문서를 참고한다. +데몬셋 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. 데몬셋에는 [`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) 섹션도 필요하다. ### 파드 템플릿 @@ -61,7 +64,7 @@ kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml ### 파드 셀렉터 `.spec.selector` 필드는 파드 셀렉터이다. 이것은 -[잡](/docs/concepts/workloads/controllers/jobs-run-to-completion/)의 `.spec.selector` 와 같은 동작을 한다. +[잡](/ko/docs/concepts/workloads/controllers/jobs-run-to-completion/)의 `.spec.selector` 와 같은 동작을 한다. 쿠버네티스 1.8 부터는 레이블이 `.spec.template` 와 일치하는 파드 셀렉터를 명시해야 한다. 파드 셀렉터는 비워두면 더 이상 기본 값이 설정이 되지 않는다. diff --git a/content/ko/docs/concepts/workloads/controllers/deployment.md b/content/ko/docs/concepts/workloads/controllers/deployment.md index c79b13bc48c45..2d9097c7183f0 100644 --- a/content/ko/docs/concepts/workloads/controllers/deployment.md +++ b/content/ko/docs/concepts/workloads/controllers/deployment.md @@ -1018,6 +1018,8 @@ $ echo $? 
다른 모든 쿠버네티스 설정과 마찬가지로 디플로이먼트에는 `apiVersion`, `kind` 그리고 `metadata` 필드가 필요하다. 설정 파일 작업에 대한 일반적인 내용은 [애플리케이션 배포하기](/docs/tutorials/stateless-application/run-stateless-application-deployment/), 컨테이너 구성하기 그리고 [kubectl을 사용해서 리소스 관리하기](/ko/docs/concepts/overview/working-with-objects/object-management/) 문서를 참조한다. +디플로이먼트 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. 디플로이먼트에는 [`.spec` 섹션](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)도 필요하다. diff --git a/content/ko/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/ko/docs/concepts/workloads/controllers/jobs-run-to-completion.md index 939b6e8293277..5aba53574b6fc 100644 --- a/content/ko/docs/concepts/workloads/controllers/jobs-run-to-completion.md +++ b/content/ko/docs/concepts/workloads/controllers/jobs-run-to-completion.md @@ -36,7 +36,7 @@ weight: 70 이 명령으로 예시를 실행할 수 있다. ```shell -kubectl apply -f https://k8s.io/examples/controllers/job.yaml +kubectl apply -f https://kubernetes.io/examples/controllers/job.yaml ``` ``` job.batch/pi created @@ -111,6 +111,7 @@ kubectl logs $pods ## 잡 사양 작성하기 다른 쿠버네티스의 설정과 마찬가지로 잡에는 `apiVersion`, `kind` 그리고 `metadata` 필드가 필요하다. +잡의 이름은 유효한 [DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. 잡에는 [`.spec` 섹션](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)도 필요하다. diff --git a/content/ko/docs/concepts/workloads/controllers/replicaset.md b/content/ko/docs/concepts/workloads/controllers/replicaset.md index 47b74587aa956..d98fbcd7cff5c 100644 --- a/content/ko/docs/concepts/workloads/controllers/replicaset.md +++ b/content/ko/docs/concepts/workloads/controllers/replicaset.md @@ -16,8 +16,8 @@ weight: 10 ## 레플리카셋의 작동 방식 -레플리카셋을 정의하는 필드는 획득 가능한 파드를 식별하는 방법이 명시된 셀렉터, 유지해야 하는 파드 개수를 명시하는 레플리카의 개수, -그리고 레플리카 수 유지를 위해 생성하는 신규 파드에 대한 데이터를 명시하는 파드 템플릿을 포함한다. +레플리카셋을 정의하는 필드는 획득 가능한 파드를 식별하는 방법이 명시된 셀렉터, 유지해야 하는 파드 개수를 명시하는 레플리카의 개수, +그리고 레플리카 수 유지를 위해 생성하는 신규 파드에 대한 데이터를 명시하는 파드 템플릿을 포함한다. 그러면 레플리카셋은 필드에 지정된 설정을 충족하기 위해 필요한 만큼 파드를 만들고 삭제한다. 레플리카셋이 새로운 파드를 생성해야 할 경우, 명시된 파드 템플릿을 사용한다. @@ -27,16 +27,16 @@ weight: 10 레플리카셋이 가지고 있는 모든 파드의 ownerReferences 필드는 해당 파드를 소유한 레플리카셋을 식별하기 위한 소유자 정보를 가진다. 이 링크를 통해 레플리카셋은 자신이 유지하는 파드의 상태를 확인하고 이에 따라 관리 한다. -레플리카셋은 셀렉터를 이용해서 필요한 새 파드를 식별한다. 만약 파드에 OwnerReference이 없거나 -OwnerReference가 {{< glossary_tooltip term_id="controller" >}} 가 아니고 레플리카셋의 셀렉터와 일치한다면 레플리카셋이 즉각 파드를 +레플리카셋은 셀렉터를 이용해서 필요한 새 파드를 식별한다. 만약 파드에 OwnerReference이 없거나 +OwnerReference가 {{< glossary_tooltip term_id="controller" >}} 가 아니고 레플리카셋의 셀렉터와 일치한다면 레플리카셋이 즉각 파드를 가지게 될 것이다. ## 레플리카셋을 사용하는 시기 레플리카셋은 지정된 수의 파드 레플리카가 항상 실행되도록 보장한다. -그러나 디플로이먼트는 레플리카셋을 관리하고 다른 유용한 기능과 함께 +그러나 디플로이먼트는 레플리카셋을 관리하고 다른 유용한 기능과 함께 파드에 대한 선언적 업데이트를 제공하는 상위 개념이다. -따라서 우리는 사용자 지정 오케스트레이션이 필요하거나 업데이트가 전혀 필요하지 않은 경우라면 +따라서 우리는 사용자 지정 오케스트레이션이 필요하거나 업데이트가 전혀 필요하지 않은 경우라면 레플리카셋을 직접적으로 사용하기 보다는 디플로이먼트를 사용하는 것을 권장한다. 이는 레플리카셋 오브젝트를 직접 조작할 필요가 없다는 것을 의미한다. @@ -46,7 +46,7 @@ OwnerReference가 {{< glossary_tooltip term_id="controller" >}} 가 아니고 {{< codenew file="controllers/frontend.yaml" >}} -이 매니페스트를 `frontend.yaml`에 저장하고 쿠버네티스 클러스터에 적용하면 정의되어있는 레플리카셋이 +이 매니페스트를 `frontend.yaml`에 저장하고 쿠버네티스 클러스터에 적용하면 정의되어있는 레플리카셋이 생성되고 레플리카셋이 관리하는 파드가 생성된다. ```shell @@ -54,22 +54,26 @@ kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml ``` 현재 배포된 레플리카셋을 확인할 수 있다. + ```shell kubectl get rs ``` 그리고 생성된 프런트엔드를 볼 수 있다. 
+ ```shell NAME DESIRED CURRENT READY AGE frontend 3 3 3 6s ``` 또한 레플리카셋의 상태를 확인할 수 있다. + ```shell kubectl describe rs/frontend ``` 출력은 다음과 유사할 것이다. + ```shell Name: frontend Namespace: default @@ -99,11 +103,13 @@ Events: ``` 마지막으로 파드가 올라왔는지 확인할 수 있다. + ```shell kubectl get pods ``` 다음과 유사한 파드 정보를 볼 수 있다. + ```shell NAME READY STATUS RESTARTS AGE frontend-b2zdv 1/1 Running 0 6m36s @@ -113,11 +119,13 @@ frontend-wtsmm 1/1 Running 0 6m36s 또한 파드들의 소유자 참조 정보가 해당 프런트엔드 레플리카셋으로 설정되어 있는지 확인할 수 있다. 확인을 위해서는 실행 중인 파드 중 하나의 yaml을 확인한다. + ```shell kubectl get pods frontend-b2zdv -o yaml ``` 메타데이터의 ownerReferences 필드에 설정되어있는 프런트엔드 레플리카셋의 정보가 다음과 유사하게 나오는 것을 볼 수 있다. + ```shell apiVersion: v1 kind: Pod @@ -141,32 +149,34 @@ metadata: ## 템플릿을 사용하지 않는 파드의 획득 단독(bare) 파드를 생성하는 것에는 문제가 없지만, 단독 파드가 레플리카셋의 셀렉터와 일치하는 레이블을 가지지 -않도록 하는 것을 강력하게 권장한다. 그 이유는 레플리카셋이 소유하는 파드가 템플릿에 명시된 파드에만 국한되지 않고, +않도록 하는 것을 강력하게 권장한다. 그 이유는 레플리카셋이 소유하는 파드가 템플릿에 명시된 파드에만 국한되지 않고, 이전 섹션에서 명시된 방식에 의해서도 다른 파드의 획득이 가능하기 때문이다. 이전 프런트엔드 레플리카셋 예제와 다음의 매니페스트에 명시된 파드를 가져와 참조한다. {{< codenew file="pods/pod-rs.yaml" >}} -기본 파드는 소유자 관련 정보에 컨트롤러(또는 오브젝트)를 가지지 않기 때문에 프런트엔드 +기본 파드는 소유자 관련 정보에 컨트롤러(또는 오브젝트)를 가지지 않기 때문에 프런트엔드 레플리카셋의 셀렉터와 일치하면 즉시 레플리카셋에 소유된다. -프런트엔드 레플리카셋이 배치되고 초기 파드 레플리카가 셋업된 이후에, 레플리카 수 요구 사항을 충족시키기 위해서 +프런트엔드 레플리카셋이 배치되고 초기 파드 레플리카가 셋업된 이후에, 레플리카 수 요구 사항을 충족시키기 위해서 신규 파드를 생성한다고 가정해보자. ```shell kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml ``` -새로운 파드는 레플리카셋에 의해 인식되며 레플리카셋이 필요한 수량을 초과하면 +새로운 파드는 레플리카셋에 의해 인식되며 레플리카셋이 필요한 수량을 초과하면 즉시 종료된다. 파드를 가져온다. + ```shell kubectl get pods ``` 결과에는 새로운 파드가 이미 종료되었거나 종료가 진행 중인 것을 보여준다. + ```shell NAME READY STATUS RESTARTS AGE frontend-b2zdv 1/1 Running 0 10m @@ -177,17 +187,20 @@ pod2 0/1 Terminating 0 1s ``` 파드를 먼저 생성한다. + ```shell kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml ``` 그 다음 레플리카셋을 생성한다. + ```shell kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml ``` -레플리카셋이 해당 파드를 소유한 것을 볼 수 있으며 새 파드 및 기존 파드의 수가 +레플리카셋이 해당 파드를 소유한 것을 볼 수 있으며 새 파드 및 기존 파드의 수가 레플리카셋이 필요로 하는 수와 일치할 때까지 사양에 따라 신규 파드만 생성한다. 파드를 가져온다. + ```shell kubectl get pods ``` @@ -209,6 +222,9 @@ pod2 1/1 Running 0 36s 쿠버네티스 1.9에서의 레플리카셋의 kind에 있는 API 버전 `apps/v1`은 현재 버전이며, 기본으로 활성화 되어있다. API 버전 `apps/v1beta2`은 사용 중단(deprecated)되었다. API 버전에 대해서는 `frontend.yaml` 예제의 첫 번째 줄을 참고한다. +레플리카셋 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. + 레플리카셋도 [`.spec` 섹션](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)이 필요하다. ### 파드 템플릿 @@ -230,7 +246,7 @@ matchLabels: tier: frontend ``` -레플리카셋에서 `.spec.template.metadata.labels`는 `spec.selector`과 일치해야 하며 +레플리카셋에서 `.spec.template.metadata.labels`는 `spec.selector`과 일치해야 하며 그렇지 않으면 API에 의해 거부된다. {{< note >}} @@ -250,7 +266,7 @@ matchLabels: 레플리카셋 및 모든 파드를 삭제하려면 [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete)를 사용한다. [가비지 수집기](/ko/docs/concepts/workloads/controllers/garbage-collection/)는 기본적으로 종속되어있는 모든 파드를 자동으로 삭제한다. -REST API또는 `client-go` 라이브러리를 이용할 때는 -d 옵션으로 `propagationPolicy`를 `Background`또는 `Foreground`로 +REST API또는 `client-go` 라이브러리를 이용할 때는 -d 옵션으로 `propagationPolicy`를 `Background`또는 `Foreground`로 설정해야 한다. 예시: ```shell @@ -275,12 +291,12 @@ curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/fron 원본이 삭제되면 새 레플리카셋을 생성해서 대체할 수 있다. 기존 `.spec.selector`와 신규 `.spec.selector`가 같으면 새 레플리카셋은 기존 파드를 선택한다. 하지만 신규 레플리카셋은 기존 파드를 신규 레플리카셋의 새롭고 다른 파드 템플릿에 일치시키는 작업을 수행하지는 않는다. 
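예를 들어 frontend 예시를 가정하면, 다음과 같은 최소 스케치로 기존 레플리카셋만 삭제한 뒤 동일한 셀렉터를 가진 새 레플리카셋이 기존 파드를 획득하게 할 수 있다.

```shell
# 파드는 남겨두고 레플리카셋만 삭제한다.
kubectl delete rs frontend --cascade=false

# 같은 .spec.selector를 가진 새 레플리카셋을 생성하면 기존 파드를 그대로 획득한다.
kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
```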
-컨트롤 방식으로 파드를 새로운 사양으로 업데이트 하기 위해서는 [디플로이먼트](/ko/docs/concepts/workloads/controllers/deployment/#디플로이먼트-생성)를 이용하면 된다. +컨트롤 방식으로 파드를 새로운 사양으로 업데이트 하기 위해서는 [디플로이먼트](/ko/docs/concepts/workloads/controllers/deployment/#디플로이먼트-생성)를 이용하면 된다. 이는 레플리카셋이 롤링 업데이트를 직접적으로 지원하지 않기 때문이다. ### 레플리카셋에서 파드 격리 -레이블을 변경하면 레플리카셋에서 파드를 제거할 수 있다. 이 방식은 디버깅과 데이터 복구 등을 +레이블을 변경하면 레플리카셋에서 파드를 제거할 수 있다. 이 방식은 디버깅과 데이터 복구 등을 위해 서비스에서 파드를 제거하는 데 사용할 수 있다. 이 방식으로 제거된 파드는 자동으로 교체된다( 레플리카의 수가 변경되지 않는다고 가정한다). @@ -291,15 +307,15 @@ curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/fron ### 레플리카셋을 Horizontal Pod Autoscaler 대상으로 설정 -레플리카 셋은 +레플리카 셋은 [Horizontal Pod Autoscalers (HPA)](/ko/docs/tasks/run-application/horizontal-pod-autoscale/)의 대상이 될 수 있다. 즉, 레플리카셋은 HPA에 의해 오토스케일될 수 있다. 다음은 이전에 만든 예시에서 만든 레플리카셋을 대상으로 하는 HPA 예시이다. {{< codenew file="controllers/hpa-rs.yaml" >}} -이 매니페스트를 `hpa-rs.yaml`로 저장한 다음 쿠버네티스 -클러스터에 적용하면 CPU 사용량에 따라 파드가 복제되는 +이 매니페스트를 `hpa-rs.yaml`로 저장한 다음 쿠버네티스 +클러스터에 적용하면 CPU 사용량에 따라 파드가 복제되는 오토스케일 레플리카 셋 HPA가 생성된다. ```shell @@ -317,7 +333,7 @@ kubectl autoscale rs frontend --max=10 ### 디플로이먼트(권장) -[`디플로이먼트`](/ko/docs/concepts/workloads/controllers/deployment/)는 레플리카셋을 소유하거나 업데이트를 하고, +[`디플로이먼트`](/ko/docs/concepts/workloads/controllers/deployment/)는 레플리카셋을 소유하거나 업데이트를 하고, 파드의 선언적인 업데이트와 서버측 롤링 업데이트를 할 수 있는 오브젝트이다. 레플리카셋은 단독으로 사용할 수 있지만, 오늘날에는 주로 디플로이먼트로 파드의 생성과 삭제 그리고 업데이트를 오케스트레이션하는 메커니즘으로 사용한다. 디플로이먼트를 이용해서 배포할 때 생성되는 레플리카셋을 관리하는 것에 대해 걱정하지 않아도 된다. @@ -335,14 +351,14 @@ kubectl autoscale rs frontend --max=10 ### 데몬셋 -머신 모니터링 또는 머신 로깅과 같은 머신-레벨의 기능을 제공하는 파드를 위해서는 레플리카셋 대신 +머신 모니터링 또는 머신 로깅과 같은 머신-레벨의 기능을 제공하는 파드를 위해서는 레플리카셋 대신 [`데몬셋`](/ko/docs/concepts/workloads/controllers/daemonset/)을 사용한다. -이러한 파드의 수명은 머신의 수명과 연관되어 있고, 머신에서 다른 파드가 시작하기 전에 실행되어야 하며, +이러한 파드의 수명은 머신의 수명과 연관되어 있고, 머신에서 다른 파드가 시작하기 전에 실행되어야 하며, 머신의 재부팅/종료가 준비되었을 때, 해당 파드를 종료하는 것이 안전하다. ### 레플리케이션 컨트롤러 레플리카셋은 [_레플리케이션 컨트롤러_](/ko/docs/concepts/workloads/controllers/replicationcontroller/)를 계승하였다. -이 두 개의 용도는 동일하고, 유사하게 동작하며, 레플리케이션 컨트롤러가 [레이블 사용자 가이드](/ko/docs/concepts/overview/working-with-objects/labels/#레이블-셀렉터)에 +이 두 개의 용도는 동일하고, 유사하게 동작하며, 레플리케이션 컨트롤러가 [레이블 사용자 가이드](/ko/docs/concepts/overview/working-with-objects/labels/#레이블-셀렉터)에 설명된 설정-기반의 셀렉터의 요건을 지원하지 않는다는 점을 제외하면 유사하다. 따라서 레플리카셋이 레플리케이션 컨트롤러보다 선호된다. diff --git a/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md b/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md index 245bbf464c6f2..1b80519f02fe6 100644 --- a/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/ko/docs/concepts/workloads/controllers/replicationcontroller.md @@ -113,6 +113,8 @@ nginx-3ntk0 nginx-4ok8v nginx-qrm3m ## 레플리케이션 컨트롤러의 Spec 작성 다른 모든 쿠버네티스 컨피그와 마찬가지로 레플리케이션 컨트롤러는 `apiVersion`, `kind`, `metadata` 와 같은 필드가 필요하다. +레플리케이션 컨트롤러 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. 컨피그 파일의 동작에 관련된 일반적인 정보는 다음을 참조하라 [쿠버네티스 오브젝트 관리 ](/ko/docs/concepts/overview/working-with-objects/object-management/). 레플리케이션 컨트롤러는 또한 [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) 도 필요하다. @@ -254,7 +256,7 @@ API 오브젝트에 대한 더 자세한 것은 ### 레플리카셋 -[`레플리카셋`](/docs/concepts/workloads/controllers/replicaset/)은 새로운 [집합성 기준 레이블 셀렉터](/ko/docs/concepts/overview/working-with-objects/labels/#집합성-기준-요건) 이다. 
+[`레플리카셋`](/ko/docs/concepts/workloads/controllers/replicaset/)은 새로운 [집합성 기준 레이블 셀렉터](/ko/docs/concepts/overview/working-with-objects/labels/#집합성-기준-요건) 이다. 이것은 주로 [`디플로이먼트`](/ko/docs/concepts/workloads/controllers/deployment/) 에 의해 파드의 생성, 삭제 및 업데이트를 오케스트레이션 하는 메커니즘으로 사용된다. 사용자 지정 업데이트 조정이 필요하거나 업데이트가 필요하지 않은 경우가 아니면 레플리카 셋을 직접 사용하는 대신 디플로이먼트를 사용하는 것이 좋다. diff --git a/content/ko/docs/concepts/workloads/controllers/statefulset.md b/content/ko/docs/concepts/workloads/controllers/statefulset.md index 0fd4c8bb1f563..88b975ec6e540 100644 --- a/content/ko/docs/concepts/workloads/controllers/statefulset.md +++ b/content/ko/docs/concepts/workloads/controllers/statefulset.md @@ -100,11 +100,16 @@ spec: * 이름이 nginx라는 헤드리스 서비스는 네트워크 도메인을 컨트롤하는데 사용 한다. * 이름이 web인 스테이트풀셋은 3개의 nginx 컨테이너의 레플리카가 고유의 파드에서 구동될 것이라 지시하는 Spec을 갖는다. * volumeClaimTemplates은 퍼시스턴트 볼륨 프로비저너에서 프로비전한 [퍼시스턴트 볼륨](/docs/concepts/storage/persistent-volumes/)을 사용해서 안정적인 스토리지를 제공한다. +스테이트풀셋 오브젝트의 이름은 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)이어야 한다. + ## 파드 셀렉터 + 스테이트풀셋의 `.spec.selector` 필드는 `.spec.template.metadata.labels` 레이블과 일치하도록 설정 해야 한다. 쿠버네티스 1.8 이전에서는 생략시에 `.spec.selector` 필드가 기본 설정 되었다. 1.8 과 이후 버전에서는 파드 셀렉터를 명시하지 않으면 스테이트풀셋 생성시 유효성 검증 오류가 발생하는 결과가 나오게 된다. ## 파드 신원 + 스테이트풀셋 파드는 순서, 안정적인 네트워크 신원 그리고 안정적인 스토리지로 구성되는 고유한 신원을 가진다. 신원은 파드가 어떤 노드에 있고, (재)스케줄과도 상관없이 파드에 붙어있다. diff --git a/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md b/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md index 48a8bdb303db2..aefccc9243a01 100644 --- a/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md +++ b/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md @@ -10,7 +10,7 @@ weight: 65 TTL 컨트롤러는 실행이 완료된 리소스 오브젝트의 수명을 제한하는 TTL (time to live) 메커니즘을 제공한다. TTL 컨트롤러는 현재 -[잡(Job)](/docs/concepts/workloads/controllers/jobs-run-to-completion/)만 +[잡(Job)](/ko/docs/concepts/workloads/controllers/jobs-run-to-completion/)만 처리하며, 파드와 커스텀 리소스와 같이 실행을 완료할 다른 리소스를 처리하도록 확장될 수 있다. @@ -28,7 +28,7 @@ TTL 컨트롤러는 실행이 완료된 리소스 오브젝트의 수명을 ## TTL 컨트롤러 현재의 TTL 컨트롤러는 잡만 지원한다. 클러스터 운영자는 -[예시](/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically) +[예시](/ko/docs/concepts/workloads/controllers/jobs-run-to-completion/#완료된-잡을-자동으로-정리) 와 같이 `.spec.ttlSecondsAfterFinished` 필드를 명시하여 완료된 잡(`완료` 또는 `실패`)을 자동으로 정리하기 위해 이 기능을 사용할 수 있다. 리소스의 작업이 완료된 TTL 초(sec) 후 (다른 말로는, TTL이 만료되었을 때), @@ -79,7 +79,7 @@ TTL 컨트롤러는 쿠버네티스 리소스에 {{% capture whatsnext %}} -[자동으로 잡 정리](/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically) +[자동으로 잡 정리](/ko/docs/concepts/workloads/controllers/jobs-run-to-completion/#완료된-잡을-자동으로-정리) [디자인 문서](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md) diff --git a/content/ko/docs/concepts/workloads/pods/disruptions.md b/content/ko/docs/concepts/workloads/pods/disruptions.md index 5dc6f75d328ea..da0e5752ab07e 100644 --- a/content/ko/docs/concepts/workloads/pods/disruptions.md +++ b/content/ko/docs/concepts/workloads/pods/disruptions.md @@ -46,7 +46,7 @@ weight: 60 클러스터 관리자의 작업: - 복구 또는 업그레이드를 위한 [노드 드레이닝](/docs/tasks/administer-cluster/safely-drain-node/). -- 클러스터의 스케일 축소를 위한 노드 드레이닝([클러스터 오토스케일링](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaler)에 대해 알아보기). +- 클러스터의 스케일 축소를 위한 노드 드레이닝([클러스터 오토스케일링](/ko/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaler)에 대해 알아보기). 
- 노드에 다른 무언가를 추가하기 위해 파드를 제거. 위 작업은 클러스터 관리자가 직접 수행하거나 자동화를 통해 수행하며, diff --git a/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md b/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md index 6811cd7e0a079..42f86781c5118 100644 --- a/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/ko/docs/concepts/workloads/pods/pod-lifecycle.md @@ -243,7 +243,7 @@ status: 파드의 새로운 조건들은 쿠버네티스의 [레이블 키 포멧](/ko/docs/concepts/overview/working-with-objects/labels/#구문과-캐릭터-셋)을 준수해야 한다. `kubectl patch` 명령어가 오브젝트 상태 패치(patching)를 아직 제공하지 않기 때문에, -새로운 파드 조건들은 [KubeClient 라이브러리](/docs/reference/using-api/client-libraries/)를 통한 `PATCH` 액션을 통해서 주입되어야 한다. +새로운 파드 조건들은 [KubeClient 라이브러리](/ko/docs/reference/using-api/client-libraries/)를 통한 `PATCH` 액션을 통해서 주입되어야 한다. 새로운 파드 조건들이 적용된 경우, 파드는 **오직** 다음 두 문장이 모두 참일 때만 준비 상태로 평가된다. diff --git a/content/ko/docs/concepts/workloads/pods/pod.md b/content/ko/docs/concepts/workloads/pods/pod.md index 4196a5fbb7462..edeb4b3a1f734 100644 --- a/content/ko/docs/concepts/workloads/pods/pod.md +++ b/content/ko/docs/concepts/workloads/pods/pod.md @@ -165,8 +165,8 @@ _컨테이너의 어피니티(affinity) 기반 공동 스케줄링을 지원하 1. API 서버 안의 파드는 유예 기간에 따라, 시간을 넘은 것(죽은)것으로 간주되는 파드가 업데이트 된다. 1. 클라이언트 명령에서 파드는 "Terminating" 이라는 문구를 나타낸다. 1. (3번 단계와 동시에) Kubelet은 파드가 2번 단계에서 설정된 시간으로 인해 Terminating으로 표시되는 것을 확인하면 파드 종료 단계를 시작한다. - 1. 파드의 컨테이너 중 하나에 [preStop hook](/ko/docs/concepts/containers/container-lifecycle-hooks/#hook-details)이 정의된 경우, 해당 컨테이너 내부에서 실행된다. 유예 기간이 만료된 후에도 `preStop` 훅이 계속 실행 중이면, 유예 기간을 짧게(2초) 연장해서 2번 단계를 실행한다. - 1. 파드의 프로세스에 TERM 시그널이 전달된다. 파드의 모든 컨테이너가 TERM 시그널을 동시에 받기 때문에 컨테이너의 종료 순서가 중요한 경우에는 `preStop` 훅이 각각 필요할 수 있음을 알아두자. + 1. 파드의 컨테이너 중 하나에 [preStop hook](/ko/docs/concepts/containers/container-lifecycle-hooks/#hook-details)이 정의된 경우, 해당 컨테이너 내부에서 실행된다. 유예 기간이 만료된 후에도 `preStop` 훅이 계속 실행 중이면, 유예 기간을 짧게(2초)를 1회 연장해서 2번 단계를 실행한다. + 1. 파드의 프로세스에 TERM 시그널이 전달된다. 파드의 모든 컨테이너가 TERM 시그널을 동시에 받기 때문에 컨테이너의 종료 순서가 중요한 경우에는 `preStop` 훅이 각각 필요할 수 있음을 알아두자. 만약 `preStop` 훅을 완료하는 데 더 오랜 시간이 필요한 경우 `terminationGracePeriodSeconds` 를 수정해야 한다. 1. (3번 단계와 동시에) 파드는 서비스를 위해 엔드포인트 목록에서 제거되며, 더 이상 레플리케이션 컨트롤러가 실행중인 파드로 고려하지 않는다. 느리게 종료되는 파드는 로드밸런서(서비스 프록시와 같은)의 로테이션에서 지워지기 때문에 트래픽을 계속 처리할 수 없다. 1. 유예 기간이 만료되면, 파드에서 실행중이던 모든 프로세스가 SIGKILL로 종료된다. @@ -200,6 +200,7 @@ spec.containers[0].securityContext.privileged: forbidden '<*>(0xc20b222db0)true' 파드는 쿠버네티스 REST API에서 최상위 리소스이다. API 오브젝트에 더 자세한 정보는 아래 내용을 참조한다: [파드 API 오브젝트](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core). - +파드 오브젝트에 대한 매니페스트를 생성할때는 지정된 이름이 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)인지 확인해야 한다. {{% /capture %}} diff --git a/content/ko/docs/contribute/participating.md b/content/ko/docs/contribute/participating.md index f1abbfc27c9e7..5334d859cecd2 100644 --- a/content/ko/docs/contribute/participating.md +++ b/content/ko/docs/contribute/participating.md @@ -36,14 +36,14 @@ SIG Docs는 모든 컨트리뷰터의 콘텐츠와 리뷰를 환영한다. ## 역할과 책임 - **모든 사람** 은 쿠버네티스 문서에 기여할 수 있다. 기여시 [CLA에 서명](/docs/contribute/start#sign-the-cla)하고 GitHub 계정을 가지고 있어야 한다. -- 쿠버네티스 조직의 **맴버** 는 쿠버네티스 프로젝트에 시간과 노력을 투자한 기여자이다. 일반적으로 승인되는 변경이 되는 풀 리퀘스트를 연다. 맴버십 기준은 [커뮤니티 맴버십](https://github.com/kubernetes/community/blob/master/community-membership.md)을 참조한다. +- 쿠버네티스 조직의 **멤버** 는 쿠버네티스 프로젝트에 시간과 노력을 투자한 기여자이다. 일반적으로 승인되는 변경이 되는 풀 리퀘스트를 연다. 멤버십 기준은 [커뮤니티 멤버십](https://github.com/kubernetes/community/blob/master/community-membership.md)을 참조한다. 
- SIG Docs의 **리뷰어** 는 쿠버네티스 조직의 일원으로 문서 풀 리퀘스트에 관심을 표명했고, SIG Docs 승인자에 의해 GitHub 리포지터리에 있는 GitHub 그룹과 `OWNER` 파일에 추가되었다.
-- SIG Docs의 **승인자** 는 프로젝트에 대한 지속적인 헌신을 보여준
-  좋은 맴버이다. 승인자는 쿠버네티스 조직을 대신해서
-  풀 리퀘스트를 병합하고 컨텐츠를 게시할 수 있다.
+- SIG Docs의 **승인자** 는 프로젝트에 대한 지속적인 헌신을 보여준
+  좋은 멤버이다. 승인자는 쿠버네티스 조직을 대신해서
+  풀 리퀘스트를 병합하고 컨텐츠를 게시할 수 있다.
  또한 승인자는 더 큰 쿠버네티스 커뮤니티의 SIG Docs를 대표할 수 있다.
  릴리즈 조정과 같은 SIG Docs 승인자의 일부 의무에는 상당한 시간 투입이 필요하다.
@@ -58,18 +58,18 @@ SIG Docs는 모든 컨트리뷰터의 콘텐츠와 리뷰를 환영한다.
- [슬랙](http://slack.k8s.io/) 또는 [SIG docs 메일링 리스트](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)에 개선할 아이디어를 제시한다.
- `/lgtm` Prow 명령 ("looks good to me" 의 줄임말)을 사용해서 병합을 위한 풀 리퀘스트의 변경을 추천한다.
  {{< note >}}
-  만약 쿠버네티스 조직의 맴버가 아니라면, `/lgtm` 을 사용하는 것은 자동화된 시스템에 아무런 영향을 주지 않는다.
+  만약 쿠버네티스 조직의 멤버가 아니라면, `/lgtm` 을 사용하는 것은 자동화된 시스템에 아무런 영향을 주지 않는다.
  {{< /note >}}
-[CLS에 서명](/docs/contribute/start#sign-the-cla) 후에 누구나 다음을 할 수 있다.
+[CLA에 서명](/docs/contribute/start#sign-the-cla) 후에 누구나 다음을 할 수 있다.
- 기존 콘텐츠를 개선하거나, 새 콘텐츠를 추가하거나, 블로그 게시물 또는 사례연구 작성을 위해 풀 리퀘스트를 연다.
-## 맴버
+## 멤버
-맴버는 [맴버 기준](https://github.com/kubernetes/community/blob/master/community-membership.md#member)을 충족하는 쿠버네티스 프로젝트에 기여한 사람들이다. SIG Docs는 쿠버네티스 커뮤니티의 모든 맴버로부터 기여를 환경하며,
-기술적 정확성에 대한 다른 SIG 맴버들의 검토를 수시로 요청한다.
+멤버는 [멤버 기준](https://github.com/kubernetes/community/blob/master/community-membership.md#member)을 충족하는 쿠버네티스 프로젝트에 기여한 사람들이다. SIG Docs는 쿠버네티스 커뮤니티의 모든 멤버로부터 기여를 환영하며,
+기술적 정확성에 대한 다른 SIG 멤버들의 검토를 수시로 요청한다.
-쿠버네티스 조직의 모든 맴버는 다음 작업을 할 수 있다.
+쿠버네티스 조직의 모든 멤버는 다음 작업을 할 수 있다.
- [모든 사람](#모든-사람) 하위에 나열된 모든 것
- 풀 리퀘스트 코멘트에 `/lgtm` 을 사용해서 LGTM(looks good to me) 레이블을 붙일 수 있다.
@@ -106,7 +106,7 @@ SIG Docs는 모든 컨트리뷰터의 콘텐츠와 리뷰를 환영한다.
해당 GitHub 이슈를 종료한다. 축하한다, 이제 멤버가 되었다!
-만약 맴버십 요청이 받아들여지지 않으면,
+만약 멤버십 요청이 받아들여지지 않으면,
멤버십 위원회에서 재지원 전에 필요한 정보나 단계를 알려준다.
@@ -117,7 +117,7 @@ SIG Docs는 모든 컨트리뷰터의 콘텐츠와 리뷰를 환영한다.
GitHub 그룹의 멤버이다. 리뷰어는 문서 풀 리퀘스트를 리뷰하고 제안받은 변경에 대한
피드백을 제공한다. 리뷰어는 다음 작업을 수행할 수 있다.
-- [모든 사람](#모든-사람)과 [맴버](#맴버)에 나열된 모든 것을 수행
+- [모든 사람](#모든-사람)과 [멤버](#멤버)에 나열된 모든 것을 수행
- 새 기능의 문서화
- 이슈 해결 및 분류
- 풀 리퀘스트 리뷰와 구속력있는 피드백 제공
@@ -171,7 +171,7 @@ GitHub 그룹의 멤버이다. [SIG Docs 팀과 자동화](#sig-docs-팀과-자동화)
승인자는 다음의 작업을 할 수 있다.
-- [모든 사람](#모든-사람), [맴버](#맴버) 그리고 [리뷰어](#리뷰어) 하위의 모든 목록을 할 수 있다.
+- [모든 사람](#모든-사람), [멤버](#멤버) 그리고 [리뷰어](#리뷰어) 하위의 모든 목록을 할 수 있다.
- 코멘트에 `/approve` 를 사용해서 풀 리퀘스트를 승인하고, 병합해서 기여자의 컨텐츠를 게시한다. 만약 승인자가 아닌 사람이 코멘트에 승인을 남기면 자동화 시스템에서 이를 무시한다.
- 쿠버네티스 릴리즈팀에 문서 담당자로 참여
@@ -292,10 +292,10 @@ PR 소유자에게 조언하는데 활용된다.
- 풀 리퀘스트에 `lgtm` 과 `approve` 레이블이 있고, `hold` 레이블이 없고, 모든 테스트를 통과하면
  풀 리퀘스트는 자동으로 병합된다.
-- 쿠버네티스 조직의 맴버와 SIG Docs 승인자들은 지정된 풀 리퀘스트의
+- 쿠버네티스 조직의 멤버와 SIG Docs 승인자들은 지정된 풀 리퀘스트의
  자동 병합을 방지하기 위해 코멘트를 추가할 수 있다(코멘트에 `/hold` 추가 또는 `/lgtm` 코멘트 보류).
-- 모든 쿠버네티스 맴버는 코멘트에 `/lgtm` 을 추가해서 `lgtm` 레이블을 추가할 수 있다.
+- 모든 쿠버네티스 멤버는 코멘트에 `/lgtm` 을 추가해서 `lgtm` 레이블을 추가할 수 있다.
- SIG Docs 승인자들만이 코멘트에 `/approve` 를 추가해서
  풀 리퀘스트를 병합할 수 있다. 일부 승인자들은 [PR Wrangler](#pr-wrangler) 또는 [SIG Docs 의장](#sig-docs-의장)과
diff --git a/content/ko/docs/home/_index.md b/content/ko/docs/home/_index.md
index d542f5a1ddaf0..1a12f2028ac1d 100644
--- a/content/ko/docs/home/_index.md
+++ b/content/ko/docs/home/_index.md
@@ -14,6 +14,8 @@ menu:
  weight: 20
  post: >

개념, 튜토리얼 및 참조 문서와 함께 쿠버네티스를 사용하는 방법을 익힐 수 있다. 또한, 문서에 기여하여 도움을 줄 수도 있다!

+description: >
+  쿠버네티스는 컨테이너화된 애플리케이션의 배포, 확장 및 관리를 자동화하기 위한 오픈소스 컨테이너 오케스트레이션 엔진이다. 오픈소스 프로젝트는 Cloud Native Computing Foundation에서 주관한다.
overview: >
  쿠버네티스는 배포, 스케일링, 그리고 컨테이너화된 애플리케이션의 관리를 자동화 해주는 오픈 소스 컨테이너 오케스트레이션 엔진이다. 본 오픈 소스 프로젝트는 Cloud Native Computing Foundation(CNCF)가 주관한다.
cards:
@@ -37,6 +39,11 @@ cards:
  description: "일반적인 태스크와 이를 수행하는 방법을 여러 단계로 구성된 짧은 시퀀스를 통해 살펴본다."
  button: "태스크 보기"
  button_path: "/ko/docs/tasks"
+- name: training
+  title: "교육"
+  description: "공인 쿠버네티스 인증을 획득하고 클라우드 네이티브 프로젝트를 성공적으로 수행하세요!"
+  button: "교육 보기"
+  button_path: "/training"
- name: reference
  title: 레퍼런스 정보 찾기
  description: 용어, 커맨드 라인 구문, API 자원 종류, 그리고 설치 툴 문서를 살펴본다.
diff --git a/content/ko/docs/reference/_index.md b/content/ko/docs/reference/_index.md
index 7bf1eb457953a..ada3246855fc2 100644
--- a/content/ko/docs/reference/_index.md
+++ b/content/ko/docs/reference/_index.md
@@ -22,7 +22,7 @@ content_template: templates/concept
## API 클라이언트 라이브러리
프로그래밍 언어에서 쿠버네티스 API를 호출하기 위해서,
-[클라이언트 라이브러리](/docs/reference/using-api/client-libraries/)를 사용할 수 있다.
+[클라이언트 라이브러리](/ko/docs/reference/using-api/client-libraries/)를 사용할 수 있다.
공식적으로 지원되는 클라이언트 라이브러리는 다음과 같다.
- [쿠버네티스 Go 클라이언트 라이브러리](https://github.com/kubernetes/client-go/)
diff --git a/content/ko/docs/reference/glossary/annotation.md b/content/ko/docs/reference/glossary/annotation.md
index 36b814c1588ed..9232be81d68bb 100755
--- a/content/ko/docs/reference/glossary/annotation.md
+++ b/content/ko/docs/reference/glossary/annotation.md
@@ -14,5 +14,5 @@ tags:
-어노테이션으로 된 메타데이터는 작거나 클 수 있고, 구조화되어 있거나 구조화되어 있지 않을 수도 있고, 레이블에서는 허용되지 않는 문자도 포함할 수 있다. 툴과 라이브러리와 같은 클라이언트로 메타데이터를 검색할 수 있다.
+어노테이션으로 된 메타데이터는 작거나 클 수 있고, 구조화되어 있거나 구조화되어 있지 않을 수도 있고, {{< glossary_tooltip text="레이블" term_id="label" >}}에서는 허용되지 않는 문자도 포함할 수 있다. 툴과 라이브러리와 같은 클라이언트로 메타데이터를 검색할 수 있다.
diff --git a/content/ko/docs/reference/glossary/cluster.md b/content/ko/docs/reference/glossary/cluster.md
index a4a42b32aa3e3..9f4827b52b3fd 100755
--- a/content/ko/docs/reference/glossary/cluster.md
+++ b/content/ko/docs/reference/glossary/cluster.md
@@ -4,7 +4,7 @@ id: cluster
date: 2019-06-15
full_link:
short_description: >
-  컨테이너화된 애플리케이션을 실행하는 노드라고 하는 워커 머신의 집합. 모든 클러스터는 최소 한 개의 워커 노드를 가진다.
+  컨테이너화된 애플리케이션을 실행하는 {{< glossary_tooltip text="노드" term_id="node" >}}라고 하는 워커 머신의 집합. 모든 클러스터는 최소 한 개의 워커 노드를 가진다.
aka:
tags:
@@ -14,4 +14,9 @@ tags:
컨테이너화된 애플리케이션을 실행하는 노드라고 하는 워커 머신의 집합. 모든 클러스터는 최소 한 개의 워커 노드를 가진다.
-워커 노드는 애플리케이션의 구성요소인 파드를 호스트한다. 컨트롤 플레인은 워커 노드와 클러스터 내 파드를 관리한다. 프로덕션 환경에서는 일반적으로 컨트롤 플레인이 여러 컴퓨터에 걸쳐 실행되고, 클러스터는 일반적으로 여러 노드를 실행하므로 내결함성과 고가용성이 제공된다.
+워커 노드는 애플리케이션의 구성요소인
+{{< glossary_tooltip text="파드" term_id="pod" >}}를 호스트한다.
+{{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}}은 워커 노드와
+클러스터 내 파드를 관리한다. 프로덕션 환경에서는 일반적으로 컨트롤 플레인이
+여러 컴퓨터에 걸쳐 실행되고, 클러스터는 일반적으로 여러 노드를
+실행하므로 내결함성과 고가용성이 제공된다.
diff --git a/content/ko/docs/reference/glossary/container-env-variables.md b/content/ko/docs/reference/glossary/container-env-variables.md
index dc12e65839ba6..7df679f358bc4 100755
--- a/content/ko/docs/reference/glossary/container-env-variables.md
+++ b/content/ko/docs/reference/glossary/container-env-variables.md
@@ -10,7 +10,7 @@ aka:
tags:
- fundamental
---
- 컨테이너 환경 변수는 파드에서 동작 중인 컨테이너에 유용한 정보를 제공하기 위한 이름=값 쌍이다.
+ 컨테이너 환경 변수는 {{< glossary_tooltip text="파드" term_id="pod" >}}에서 동작 중인 컨테이너에 유용한 정보를 제공하기 위한 이름=값 쌍이다.
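+
+예를 들어 파드 스펙의 컨테이너 정의에는 다음과 같이 이름=값 쌍을 추가할 수 있다(아래 변수 이름과 값은 설명을 위해 가정한 것이다).
+
+```yaml
+env:
+- name: DEMO_GREETING        # 가정한 예시 변수
+  value: "Hello from the environment"
+```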
diff --git a/content/ko/docs/reference/glossary/cronjob.md b/content/ko/docs/reference/glossary/cronjob.md index 6b2aeb41549c3..452f95a4af710 100755 --- a/content/ko/docs/reference/glossary/cronjob.md +++ b/content/ko/docs/reference/glossary/cronjob.md @@ -4,14 +4,14 @@ id: cronjob date: 2018-04-12 full_link: /ko/docs/concepts/workloads/controllers/cron-jobs/ short_description: > - 주기적인 일정에 따라 실행되는 [잡](/docs/concepts/workloads/controllers/jobs-run-to-completion/)을 관리. + 주기적인 일정에 따라 실행되는 [잡](/ko/docs/concepts/workloads/controllers/jobs-run-to-completion/)을 관리. aka: tags: - core-object - workload --- - 주기적인 일정에 따라 실행되는 [잡](/docs/concepts/workloads/controllers/jobs-run-to-completion/)을 관리. + 주기적인 일정에 따라 실행되는 [잡](/ko/docs/concepts/workloads/controllers/jobs-run-to-completion/)을 관리. diff --git a/content/ko/docs/reference/glossary/deployment.md b/content/ko/docs/reference/glossary/deployment.md index 8c7192f1f7c23..e21c413d70625 100755 --- a/content/ko/docs/reference/glossary/deployment.md +++ b/content/ko/docs/reference/glossary/deployment.md @@ -16,5 +16,5 @@ tags: -각 레플리카는 {{< glossary_tooltip text="파드" term_id="pod" >}}로 표현되며, 파드는 클러스터의 노드에 분산된다. +각 레플리카는 {{< glossary_tooltip text="파드" term_id="pod" >}}로 표현되며, 파드는 클러스터의 {{< glossary_tooltip text="노드" term_id="node" >}}에 분산된다. diff --git a/content/ko/docs/reference/glossary/image.md b/content/ko/docs/reference/glossary/image.md index 2313d1740b790..d7583343b82f4 100755 --- a/content/ko/docs/reference/glossary/image.md +++ b/content/ko/docs/reference/glossary/image.md @@ -10,7 +10,7 @@ aka: tags: - fundamental --- - 컨테이너의 저장된 인스턴스이며, 애플리케이션 구동에 필요한 소프트웨어 집합을 가지고 있다. + {{< glossary_tooltip term_id="container" >}}의 저장된 인스턴스이며, 애플리케이션 구동에 필요한 소프트웨어 집합을 가지고 있다. diff --git a/content/ko/docs/reference/glossary/init-container.md b/content/ko/docs/reference/glossary/init-container.md index fdb4d2c82cb2d..1d870fcf65a11 100755 --- a/content/ko/docs/reference/glossary/init-container.md +++ b/content/ko/docs/reference/glossary/init-container.md @@ -10,9 +10,8 @@ aka: tags: - fundamental --- - 앱 컨테이너가 동작하기 전에 완료되기 위해 실행되는 하나 이상의 초기화 컨테이너. + 앱 컨테이너가 동작하기 전에 완료되기 위해 실행되는 하나 이상의 초기화 {{< glossary_tooltip text="컨테이너" term_id="container" >}}. 한 가지 차이점을 제외하면, 초기화 컨테이너는 일반적인 앱 컨테이너와 동일하다. 초기화 컨테이너는 앱 컨테이너가 시작되기 전에 완료되는 것을 목표로 실행되어야 한다. 초기화 컨테이너는 연달아 실행된다. 다시말해, 각 초기화 컨테이너의 실행은 다음 초기화 컨테이너가 시작되기 전에 완료되어야 한다. - diff --git a/content/ko/docs/reference/glossary/kube-proxy.md b/content/ko/docs/reference/glossary/kube-proxy.md index ea7eb79d580be..a4d53816024e7 100755 --- a/content/ko/docs/reference/glossary/kube-proxy.md +++ b/content/ko/docs/reference/glossary/kube-proxy.md @@ -11,10 +11,16 @@ tags: - fundamental - networking --- - [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/)는 클러스터의 각 노드에서 실행되는 네트워크 프록시로, 쿠버네티스의 {{< glossary_tooltip text="서비스" term_id="service">}} 개념의 구현부이다. + kube-proxy는 클러스터의 각 + {{< glossary_tooltip text="노드" term_id="node" >}}에서 + 실행되는 네트워크 프록시로, 쿠버네티스의 + {{< glossary_tooltip text="서비스" term_id="service">}} 개념의 구현부이다. -kube-proxy는 노드의 네트워크 규칙을 유지 관리한다. 이 네트워크 규칙이 내부 네트워크 세션이나 클러스터 바깥에서 파드로 네트워크 통신을 할 수 있도록 해준다. +[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/)는 +노드의 네트워크 규칙을 유지 관리한다. 이 네트워크 규칙이 내부 네트워크 +세션이나 클러스터 바깥에서 파드로 네트워크 통신을 +할 수 있도록 해준다. kube-proxy는 운영 체제에 가용한 패킷 필터링 계층이 있는 경우, 이를 사용한다. 그렇지 않으면, kube-proxy는 트래픽 자체를 포워드(forward)한다. 
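클러스터 구성 방식에 따라 다르지만, 예를 들어 kubeadm으로 구성한 클러스터를 가정하면 다음과 같이 kube-proxy가 각 노드에서 데몬셋으로 실행 중인지 확인해 볼 수 있다.

```shell
kubectl get daemonset kube-proxy --namespace kube-system
```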
diff --git a/content/ko/docs/reference/glossary/kube-scheduler.md b/content/ko/docs/reference/glossary/kube-scheduler.md index a560832ef5c58..38562f6087680 100644 --- a/content/ko/docs/reference/glossary/kube-scheduler.md +++ b/content/ko/docs/reference/glossary/kube-scheduler.md @@ -10,9 +10,15 @@ aka: tags: - architecture --- - 노드가 배정되지 않은 새로 생성된 파드를 감지하고, 실행할 노드를 선택하는 컨트롤 플레인 컴포넌트. + {{< glossary_tooltip term_id="node" text="노드" >}}가 배정되지 않은 새로 생성된 + {{< glossary_tooltip term_id="pod" text="파드" >}} 를 감지하고, + 실행할 노드를 선택하는 컨트롤 + 플레인 컴포넌트. -스케줄링 결정을 위해서 고려되는 요소는 리소스에 대한 개별 및 총체적 요구 사항, 하드웨어/소프트웨어/정책적 제약, 어피니티(affinity) 및 안티-어피니티(anti-affinity) 명세, 데이터 지역성, 워크로드-간 간섭, 데드라인을 포함한다. +스케줄링 결정을 위해서 고려되는 요소는 리소스에 대한 +개별 및 총체적 요구 사항, 하드웨어/소프트웨어/정책적 제약, +어피니티(affinity) 및 안티-어피니티(anti-affinity) 명세, +데이터 지역성, 워크로드-간 간섭, 데드라인을 포함한다. diff --git a/content/ko/docs/reference/glossary/kubelet.md b/content/ko/docs/reference/glossary/kubelet.md index 0b7852adb2400..671a50173bad3 100644 --- a/content/ko/docs/reference/glossary/kubelet.md +++ b/content/ko/docs/reference/glossary/kubelet.md @@ -11,7 +11,7 @@ tags: - fundamental - core-object --- - 클러스터의 각 노드에서 실행되는 에이전트. Kubelet은 파드에서 컨테이너가 확실하게 동작하도록 관리한다. + 클러스터의 각 {{< glossary_tooltip text="노드" term_id="node" >}}에서 실행되는 에이전트. Kubelet은 {{< glossary_tooltip text="파드" term_id="pod" >}}에서 {{< glossary_tooltip text="컨테이너" term_id="container" >}}가 확실하게 동작하도록 관리한다. diff --git a/content/ko/docs/reference/glossary/replication-controller.md b/content/ko/docs/reference/glossary/replication-controller.md index f5c19a33aafd6..30f2aabf9411c 100755 --- a/content/ko/docs/reference/glossary/replication-controller.md +++ b/content/ko/docs/reference/glossary/replication-controller.md @@ -11,7 +11,7 @@ tags: - workload - core-object --- - 특정 수의 파드 인스턴스가 항상 동작하도록 보장하는 쿠버네티스 서비스. + 특정 수의 {{< glossary_tooltip text="파드" term_id="pod" >}} 인스턴스가 항상 동작하도록 보장하는 쿠버네티스 서비스. diff --git a/content/ko/docs/reference/glossary/selector.md b/content/ko/docs/reference/glossary/selector.md index c5a4011b766fe..642fd31e22e99 100755 --- a/content/ko/docs/reference/glossary/selector.md +++ b/content/ko/docs/reference/glossary/selector.md @@ -10,9 +10,8 @@ aka: tags: - fundamental --- - 사용자가 레이블에 따라서 리소스 리스트를 필터할 수 있게 한다. + 사용자가 {{< glossary_tooltip text="레이블" term_id="label" >}}에 따라서 리소스 리스트를 필터할 수 있게 한다. -셀렉터는 리소스 리스트를 질의할 때 리스트를 {{< glossary_tooltip text="레이블" term_id="label" >}}에 따라서 필터하기 위해서 적용된다.Selectors are applied when querying lists of resources to filter them by {{< glossary_tooltip text="Labels" term_id="label" >}}. - +셀렉터는 리소스 리스트를 질의할 때 리스트를 레이블에 따라서 필터하기 위해서 적용된다. diff --git a/content/ko/docs/reference/glossary/taint.md b/content/ko/docs/reference/glossary/taint.md index 66ec24d28de29..4e50003e92ba4 100644 --- a/content/ko/docs/reference/glossary/taint.md +++ b/content/ko/docs/reference/glossary/taint.md @@ -11,8 +11,8 @@ tags: - core-object - fundamental --- - 세 가지 필수 속성: 키(key), 값(value), 효과(effect)로 구성된 코어 오브젝트. 테인트는 파드가 노드나 노드 그룹에 스케줄링되는 것을 방지한다. + 세 가지 필수 속성: 키(key), 값(value), 효과(effect)로 구성된 코어 오브젝트. 테인트는 {{< glossary_tooltip text="파드" term_id="pod" >}}가 {{< glossary_tooltip text="노드" term_id="node" >}}나 노드 그룹에 스케줄링되는 것을 방지한다. -테인트 및 {{< glossary_tooltip text="톨러레이션(toleration)" term_id="toleration" >}}은 함께 작동하며, 파드가 적절하지 못한 노드에 스케줄되는 것을 방지한다. 하나 이상의 테인트가 {{< glossary_tooltip text="노드" term_id="node" >}}에 적용될 수 있으며, 이것은 노드에 해당 테인트를 극복(tolerate)하지 않은 파드를 허용하지 않도록 표시한다. 
+테인트 및 {{< glossary_tooltip text="톨러레이션(toleration)" term_id="toleration" >}}은 함께 작동하며, 파드가 적절하지 못한 노드에 스케줄되는 것을 방지한다. 하나 이상의 테인트가 노드에 적용될 수 있으며, 이것은 노드에 해당 테인트를 극복(tolerate)하지 않은 파드를 허용하지 않도록 표시한다. diff --git a/content/ko/docs/reference/glossary/volume.md b/content/ko/docs/reference/glossary/volume.md index 8e2841e72cfcd..24e37a3165617 100755 --- a/content/ko/docs/reference/glossary/volume.md +++ b/content/ko/docs/reference/glossary/volume.md @@ -11,9 +11,11 @@ tags: - core-object - fundamental --- - 데이터를 포함하고 있는 디렉토리이며, {{< glossary_tooltip text="파드" term_id="pod" >}}의 컨테이너에서 접근 가능하다. + 데이터를 포함하고 있는 디렉토리이며, {{< glossary_tooltip term_id="pod" >}}의 {{< glossary_tooltip text="컨테이너" term_id="container" >}}에서 접근 가능하다. -쿠버네티스 볼륨은 그것을 포함하고 있는 {{< glossary_tooltip text="파드" term_id="pod" >}}만큼 오래 산다. 결과적으로, 볼륨은 {{< glossary_tooltip text="파드" term_id="pod" >}} 안에서 실행되는 모든 {{< glossary_tooltip text="컨테이너" term_id="container" >}} 보다 오래 지속되며, 데이터는 {{< glossary_tooltip text="컨테이너" term_id="container" >}}의 재시작 간에도 보존된다. +쿠버네티스 볼륨은 그것을 포함하고 있는 파드만큼 오래 산다. 결과적으로, 볼륨은 파드 안에서 실행되는 모든 컨테이너 보다 오래 지속되며, 데이터는 컨테이너의 재시작 간에도 보존된다. + +더 많은 정보는 [스토리지](https://kubernetes.io/ko/docs/concepts/storage/)를 본다. diff --git a/content/ko/docs/reference/kubectl/cheatsheet.md b/content/ko/docs/reference/kubectl/cheatsheet.md index ccac6c32d1589..83d8e591ab8a1 100644 --- a/content/ko/docs/reference/kubectl/cheatsheet.md +++ b/content/ko/docs/reference/kubectl/cheatsheet.md @@ -38,13 +38,13 @@ complete -F __start_kubectl k ```bash source <(kubectl completion zsh) # 현재 셸에 zsh의 자동 완성 설정 -echo "if [ $commands[kubectl] ]; then source <(kubectl completion zsh); fi" >> ~/.zshrc # 자동 완성을 zsh 셸에 영구적으로 추가한다. +echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc # 자동 완성을 zsh 셸에 영구적으로 추가한다. ``` ## Kubectl 컨텍스트와 설정 `kubectl`이 통신하고 설정 정보를 수정하는 쿠버네티스 클러스터를 -지정한다. 설정 파일에 대한 자세한 정보는 [kubeconfig를 이용한 클러스터 간 인증](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) 문서를 +지정한다. 설정 파일에 대한 자세한 정보는 [kubeconfig를 이용한 클러스터 간 인증](/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) 문서를 참고한다. ```bash @@ -91,7 +91,7 @@ kubectl apply -f ./my1.yaml -f ./my2.yaml # 여러 파일로 부터 생성 kubectl apply -f ./dir # dir 내 모든 매니페스트 파일에서 리소스(들) 생성 kubectl apply -f https://git.io/vPieo # url로부터 리소스(들) 생성 kubectl create deployment nginx --image=nginx # nginx 단일 인스턴스를 시작 -kubectl explain pods,svc # 파드와 서비스 매니페스트 문서를 조회 +kubectl explain pods # 파드 매니페스트 문서를 조회 # stdin으로 다수의 YAML 오브젝트 생성 cat <}} @@ -100,12 +102,13 @@ API 버전의 차이는 수준의 안정성과 지원의 차이를 나타낸다. `--runtime-config` 변경을 반영해야 한다. {{< /note >}} -## 그룹 내 리소스 활성화 시키기 +## extensions/v1beta1 그룹 내 특정 리소스 활성화 하기 + +데몬셋, 디플로이먼트, 스테이트풀셋, 네트워크정책, 파드보안정책 그리고 레플리카셋은 `extensions/v1beta1` API 그룹에서 기본적으로 비활성화되어있다. +예시: 디플로이먼트와 데몬셋의 활성화 설정은 +`--runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true` 를 입력한다. -데몬셋, 디플로이먼트, HorizontalPodAutoscaler, 인그레스, 잡 및 레플리카셋이 기본적으로 활성화되어 있다. -다른 확장 리소스는 apiserver의 `--runtime-config`를 설정해서 -활성화할 수 있다. `--runtime-config`는 쉼표로 분리된 값을 허용한다. 예를 들어 디플로이먼트와 잡을 비활성화하려면, -`--runtime-config=extensions/v1beta1/deployments=false,extensions/v1beta1/ingresses=false`와 같이 설정한다. +{{< note >}}개별 리소스의 활성화/비활성화는 레거시 문제로 `extensions/v1beta1` API 그룹에서만 지원된다. 
{{< /note >}} {{% /capture %}} diff --git a/content/ko/docs/reference/using-api/client-libraries.md b/content/ko/docs/reference/using-api/client-libraries.md index 3cc93f7c7f41b..82ebdccb29bd4 100644 --- a/content/ko/docs/reference/using-api/client-libraries.md +++ b/content/ko/docs/reference/using-api/client-libraries.md @@ -58,6 +58,7 @@ Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery | PHP | [github.com/allansun/kubernetes-php-client](https://github.com/allansun/kubernetes-php-client) | | PHP | [github.com/travisghansen/kubernetes-client-php](https://github.com/travisghansen/kubernetes-client-php) | | Python | [github.com/eldarion-gondor/pykube](https://github.com/eldarion-gondor/pykube) | +| Python | [github.com/fiaas/k8s](https://github.com/fiaas/k8s) | | Python | [github.com/mnubo/kubernetes-py](https://github.com/mnubo/kubernetes-py) | | Python | [github.com/tomplus/kubernetes_asyncio](https://github.com/tomplus/kubernetes_asyncio) | | Ruby | [github.com/Ch00k/kuber](https://github.com/Ch00k/kuber) | diff --git a/content/ko/docs/setup/_index.md b/content/ko/docs/setup/_index.md index 0ece7e3661461..fe3e81d84f12b 100644 --- a/content/ko/docs/setup/_index.md +++ b/content/ko/docs/setup/_index.md @@ -72,7 +72,7 @@ card: | [CloudStack](https://cloudstack.apache.org/) | | | | | ✔| | [Canonical](https://ubuntu.com/kubernetes) | ✔ | ✔ | ✔ | ✔ |✔ | ✔ | [Containership](https://containership.io) | ✔ |✔ | | | | -| [D2iQ](https://d2iq.com/) | | [Kommander](https://d2iq.com/solutions/ksphere) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | +| [D2iQ](https://d2iq.com/) | | [Kommander](https://docs.d2iq.com/ksphere/kommander/) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | | [Digital Rebar](https://provision.readthedocs.io/en/tip/README.html) | | | | | | ✔ | [DigitalOcean](https://www.digitalocean.com/products/kubernetes/) | ✔ | | | | | | [Docker Enterprise](https://www.docker.com/products/docker-enterprise) | |✔ | ✔ | | | ✔ @@ -84,7 +84,7 @@ card: | [Ionos](https://www.ionos.com/enterprise-cloud) | [Ionos Managed Kubernetes](https://www.ionos.com/enterprise-cloud/managed-kubernetes) | [Ionos Enterprise Cloud](https://www.ionos.com/enterprise-cloud) | | | [Kontena Pharos](https://www.kontena.io/pharos/) | |✔| ✔ | | | | [KubeOne](https://kubeone.io/) | | ✔ | ✔ | ✔ | ✔ | ✔ | -| [Kubermatic](https://kubermatic.io/) | ✔ | ✔ | ✔ | ✔ | ✔ | | +| [Kubermatic](https://kubermatic.io/) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | | [KubeSail](https://kubesail.com/) | ✔ | | | | | | [Kubespray](https://kubespray.io/#/) | | | |✔ | ✔ | ✔ | | [Kublr](https://kublr.com/) |✔ | ✔ |✔ |✔ |✔ |✔ | diff --git a/content/ko/docs/setup/best-practices/multiple-zones.md b/content/ko/docs/setup/best-practices/multiple-zones.md index d9ef519878203..f81c44de8b6b2 100644 --- a/content/ko/docs/setup/best-practices/multiple-zones.md +++ b/content/ko/docs/setup/best-practices/multiple-zones.md @@ -184,7 +184,7 @@ kubernetes-minion-wf8i Ready 2m v1.13.0 Create a volume using the dynamic volume creation (only PersistentVolumes are supported for zone affinity): -```json 
+```bash kubectl apply -f - <}} 컨테이너를 실행할 때 runc가 시스템 파일 디스크립터를 처리하는 방식에서 결함이 발견되었다. -악성 컨테이너는 이 결함을 사용하여 runc 바이너리의 내용을 덮어쓸 수 있으며 +악성 컨테이너는 이 결함을 사용하여 runc 바이너리의 내용을 덮어쓸 수 있으며 따라서 컨테이너 호스트 시스템에서 임의의 명령을 실행할 수 있다. 이 문제에 대한 자세한 내용은 @@ -34,18 +34,18 @@ weight: 10 ### Cgroup 드라이버 -Linux 배포판의 init 시스템이 systemd인 경우, init 프로세스는 -root control group(`cgroup`)을 생성 및 사용하는 cgroup 관리자로 작동한다. -Systemd는 cgroup과의 긴밀한 통합을 통해 프로세스당 cgroup을 할당한다. -컨테이너 런타임과 kubelet이 `cgroupfs`를 사용하도록 설정할 수 있다. +Linux 배포판의 init 시스템이 systemd인 경우, init 프로세스는 +root control group(`cgroup`)을 생성 및 사용하는 cgroup 관리자로 작동한다. +Systemd는 cgroup과의 긴밀한 통합을 통해 프로세스당 cgroup을 할당한다. +컨테이너 런타임과 kubelet이 `cgroupfs`를 사용하도록 설정할 수 있다. systemd와 함께`cgroupfs`를 사용하면 두 개의 서로 다른 cgroup 관리자가 존재하게 된다는 뜻이다. -Control group은 프로세스에 할당된 리소스를 제한하는데 사용된다. -단일 cgroup 관리자는 할당된 리소스가 무엇인지를 단순화하고, -기본적으로 사용가능한 리소스와 사용중인 리소스를 일관성있게 볼 수 있다. -관리자가 두 개인 경우, 이런 리소스도 두 개의 관점에서 보게 된다. kubelet과 Docker는 -`cgroupfs`를 사용하고 나머지 프로세스는 -`systemd`를 사용하도록 노드가 설정된 경우, +Control group은 프로세스에 할당된 리소스를 제한하는데 사용된다. +단일 cgroup 관리자는 할당된 리소스가 무엇인지를 단순화하고, +기본적으로 사용가능한 리소스와 사용중인 리소스를 일관성있게 볼 수 있다. +관리자가 두 개인 경우, 이런 리소스도 두 개의 관점에서 보게 된다. kubelet과 Docker는 +`cgroupfs`를 사용하고 나머지 프로세스는 +`systemd`를 사용하도록 노드가 설정된 경우, 리소스가 부족할 때 불안정해지는 사례를 본 적이 있다. 컨테이너 런타임과 kubelet이 `systemd`를 cgroup 드라이버로 사용하도록 설정을 변경하면 @@ -53,7 +53,7 @@ Control group은 프로세스에 할당된 리소스를 제한하는데 사용 {{< caution >}} 클러스터에 결합되어 있는 노드의 cgroup 관리자를 변경하는 것은 권장하지 않는다. -하나의 cgroup 드라이버의 의미를 사용하여 kubelet이 파드를 생성해왔다면, +하나의 cgroup 드라이버의 의미를 사용하여 kubelet이 파드를 생성해왔다면, 컨테이너 런타임을 다른 cgroup 드라이버로 변경하는 것은 존재하는 기존 파드에 대해 PodSandBox를 재생성을 시도할 때, 에러가 발생할 수 있다. kubelet을 재시작 하는 것은 에러를 해결할 수 없을 것이다. 추천하는 방법은 워크로드에서 노드를 제거하고, 클러스터에서 제거한 다음 다시 결합시키는 것이다. @@ -62,7 +62,7 @@ kubelet을 재시작 하는 것은 에러를 해결할 수 없을 것이다. ## Docker 각 머신들에 대해서, Docker를 설치한다. -버전 19.03.4가 추천된다. 그러나 1.13.1, 17.03, 17.06, 17.09, 18.06 그리고 18.09도 동작하는 것으로 알려져 있다. +버전 19.03.4가 추천된다. 그러나 1.13.1, 17.03, 17.06, 17.09, 18.06 그리고 18.09도 동작하는 것으로 알려져 있다. 쿠버네티스 릴리스 노트를 통해서, 최신에 검증된 Docker 버전의 지속적인 파악이 필요하다. 시스템에 Docker를 설치하기 위해서 아래의 커맨드들을 사용한다. @@ -218,7 +218,7 @@ systemctl start crio ## Containerd -이 섹션은 `containerd`를 CRI 런타임으로써 사용하는데 필요한 단계를 담고 있다. +이 섹션은 `containerd`를 CRI 런타임으로써 사용하는데 필요한 단계를 담고 있다. Containerd를 시스템에 설치하기 위해서 다음의 커맨드들을 사용한다. @@ -304,4 +304,4 @@ kubeadm을 사용하는 경우에도 마찬가지로, 수동으로 자세한 정보는 [Frakti 빠른 시작 가이드](https://github.com/kubernetes/frakti#quickstart)를 참고한다. -{{% /capture %}} +{{% /capture %}} \ No newline at end of file diff --git a/content/ko/docs/setup/production-environment/tools/kops.md b/content/ko/docs/setup/production-environment/tools/kops.md index f1c1ed4d2ddb7..b9162e18edaeb 100644 --- a/content/ko/docs/setup/production-environment/tools/kops.md +++ b/content/ko/docs/setup/production-environment/tools/kops.md @@ -50,14 +50,12 @@ curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https:// | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64 ``` -특정 버전을 다운로드 받는다면 다음을 변경한다. +특정 버전을 다운로드 받는다면 명령의 다음부분을 특정 kops 버전으로 변경한다. ```shell $(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4) ``` -특정 버전의 명령 부분이다. - 예를 들어 kops 버전을 v1.15.0을 다운로드 하려면 다음을 입력한다. 
```shell diff --git a/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md index f5d624e07ea7f..c12f9e68a3902 100644 --- a/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md +++ b/content/ko/docs/setup/production-environment/windows/user-guide-windows-containers.md @@ -104,6 +104,14 @@ spec: 윈도우 컨테이너 호스트는 현재 윈도우 네트워킹 스택의 플랫폼 제한으로 인해, 그 안에서 스케줄링하는 서비스의 IP 주소로 접근할 수 없다. 윈도우 파드만 서비스 IP 주소로 접근할 수 있다. {{< /note >}} +## 가시성 + +### 워크로드에서 로그 캡쳐하기 + +로그는 가시성의 중요한 요소이다. 로그는 사용자가 워크로드의 운영측면을 파악할 수 있도록 하며 문제 해결의 핵심 요소이다. 윈도우 컨테이너와 워크로드 내의 윈도우 컨테이너가 리눅스 컨테이너와는 다르게 동작하기 때문에, 사용자가 로그를 수집하는 데 어려움을 겪었기에 운영 가시성이 제한되었다. 예를 들어 윈도우 워크로드는 일반적으로 ETW(Event Tracing for Windows)에 로그인하거나 애플리케이션 이벤트 로그에 항목을 푸시하도록 구성한다. Microsoft의 오픈 소스 도구인 [LogMonitor](https://github.com/microsoft/windows-container-tools/tree/master/LogMonitor)는 윈도우 컨테이너 안에 구성된 로그 소스를 모니터링하는 권장하는 방법이다. LogMonitor는 이벤트 로그, ETW 공급자 그리고 사용자 정의 애플리케이션 로그 모니터링을 지원하고 `kubectl logs ` 에 의한 사용을 위해 STDOUT으로 파이프한다. + +LogMonitor Github 페이지의 지침에 따라 모든 컨테이너 바이너리와 설정 파일을 복사하고, LogMonitor에 필요한 입력 지점을 추가해서 로그를 STDOUT으로 푸시한다. + ## 설정 가능한 컨테이너 username 사용하기 쿠버네티스 v1.16 부터, 윈도우 컨테이너는 이미지 기본 값과는 다른 username으로 엔트리포인트와 프로세스를 실행하도록 설정할 수 있다. 이 방식은 리눅스 컨테이너에서 지원되는 방식과는 조금 차이가 있다. [여기](/docs/tasks/configure-pod-container/configure-runasusername/)에서 이에 대해 추가적으로 배울 수 있다. diff --git a/content/ko/docs/setup/production-environment/windows/user-guide-windows-nodes.md b/content/ko/docs/setup/production-environment/windows/user-guide-windows-nodes.md index 4248df3b16f5c..e6720c9b0b5cb 100644 --- a/content/ko/docs/setup/production-environment/windows/user-guide-windows-nodes.md +++ b/content/ko/docs/setup/production-environment/windows/user-guide-windows-nodes.md @@ -291,7 +291,7 @@ v1.14 이후의 최신 바이너리를 [https://github.com/kubernetes/kubernetes 본 단계는 다음의 행위를 수행한다. -1. 컨트롤 플레인("마스터") 노드에 SSH로 접속해서 [Kubeconfig 파일](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)을 얻어온다. +1. 컨트롤 플레인("마스터") 노드에 SSH로 접속해서 [Kubeconfig 파일](/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig/)을 얻어온다. 1. kubelet을 윈도우 서비스로 등록한다. 1. CNI 네트워크 플러그인을 구성한다. 1. 선택된 네트워크 인터페이스 상에서 HNS 네트워크를 생성한다. @@ -328,7 +328,7 @@ kubectl get nodes 1. 등록된 모든 쿠버네티스 서비스(flanneld, kubelet, kube-proxy)를 해지한다. 1. 쿠버네티스 바이너리(kube-proxy.exe, kubelet.exe, flanneld.exe, kubeadm.exe)를 모두 삭제한다. 1. CNI 네트워크 플러그인 바이너리를 모두 삭제한다. -1. 쿠버네티스 클러스터에 접근하기 위한 [Kubeconfig 파일](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)을 삭제한다. +1. 쿠버네티스 클러스터에 접근하기 위한 [Kubeconfig 파일](/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig/)을 삭제한다. ### 퍼블릭 클라우드 제공자 diff --git a/content/ko/docs/tasks/access-application-cluster/access-cluster.md b/content/ko/docs/tasks/access-application-cluster/access-cluster.md index 75929ae973809..bdb42ba3a2fcb 100644 --- a/content/ko/docs/tasks/access-application-cluster/access-cluster.md +++ b/content/ko/docs/tasks/access-application-cluster/access-cluster.md @@ -179,7 +179,7 @@ Python 클라이언트는 apiserver의 위치지정과 인증에 kubectl CLI와 ### 다른 언어 -다른 언어에서 API를 접속하기 위한 [클라이언트 라이브러리들](/docs/reference/using-api/client-libraries/)도 존재한다. +다른 언어에서 API를 접속하기 위한 [클라이언트 라이브러리들](/ko/docs/reference/using-api/client-libraries/)도 존재한다. 이들이 어떻게 인증하는지는 다른 라이브러리들의 문서를 참조한다. 
## 파드에서 API 액세스 diff --git a/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md index 0ab00f5d1c25b..4ca2cc1fafb67 100644 --- a/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md +++ b/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md @@ -322,7 +322,7 @@ contexts: ``` kubeconfig 파일들을 어떻게 병합하는지에 대한 상세정보는 -[kubeconfig 파일을 사용하여 클러스터 접근 구성하기](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)를 참조한다. +[kubeconfig 파일을 사용하여 클러스터 접근 구성하기](/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig/)를 참조한다. ## $HOME/.kube 디렉토리 탐색 @@ -372,7 +372,7 @@ Windows PowerShell {{% capture whatsnext %}} -* [kubeconfig 파일을 사용하여 클러스터 접근 구성하기](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) +* [kubeconfig 파일을 사용하여 클러스터 접근 구성하기](/ko/docs/concepts/configuration/organize-cluster-access-kubeconfig/) * [kubectl config](/docs/reference/generated/kubectl/kubectl-commands#config) {{% /capture %}} diff --git a/content/ko/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md b/content/ko/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md index 3d9a5d1d99195..318944e6b4c73 100644 --- a/content/ko/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md +++ b/content/ko/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md @@ -2,6 +2,7 @@ title: 포트 포워딩을 사용해서 클러스터 내 애플리케이션에 접근하기 content_template: templates/task weight: 40 +min-kubernetes-server-version: v1.10 --- {{% capture overview %}} @@ -26,104 +27,156 @@ weight: 40 ## Redis 디플로이먼트와 서비스 생성하기 -1. Redis 디플로이먼트를 생성한다. - - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml +1. Redis를 실행하기 위해 디플로이먼트를 생성한다. + + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml + ``` 성공적인 명령어의 출력은 디플로이먼트가 생성됐다는 것을 확인해준다. - deployment.apps/redis-master created + ``` + deployment.apps/redis-master created + ``` 파드 상태를 조회하여 파드가 준비되었는지 확인한다. - kubectl get pods + ```shell + kubectl get pods + ``` 출력은 파드가 생성되었다는 것을 보여준다. - NAME READY STATUS RESTARTS AGE - redis-master-765d459796-258hz 1/1 Running 0 50s + ``` + NAME READY STATUS RESTARTS AGE + redis-master-765d459796-258hz 1/1 Running 0 50s + ``` 디플로이먼트 상태를 조회한다. - kubectl get deployment + ```shell + kubectl get deployment + ``` 출력은 디플로이먼트가 생성되었다는 것을 보여준다. - NAME READY UP-TO-DATE AVAILABLE AGE - redis-master 1/1 1 1 55s + ``` + NAME READY UP-TO-DATE AVAILABLE AGE + redis-master 1/1 1 1 55s + ``` 아래의 명령어를 사용하여 레플리카셋 상태를 조회한다. - kubectl get rs + ```shell + kubectl get rs + ``` 출력은 레플리카셋이 생성되었다는 것을 보여준다. - NAME DESIRED CURRENT READY AGE - redis-master-765d459796 1 1 1 1m + ``` + NAME DESIRED CURRENT READY AGE + redis-master-765d459796 1 1 1 1m + ``` -2. Redis 서비스를 생성한다. +2. Redis를 네트워크에 노출시키기 위해 서비스를 생성한다. - kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml + ```shell + kubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-service.yaml + ``` 성공적인 커맨드의 출력은 서비스가 생성되었다는 것을 확인해준다. - service/redis-master created + ``` + service/redis-master created + ``` 서비스가 생성되었는지 확인한다. - kubectl get svc | grep redis + ```shell + kubectl get svc | grep redis + ``` 출력은 서비스가 생성되었다는 것을 보여준다. 
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - redis-master ClusterIP 10.0.0.213 6379/TCP 27s + ``` + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + redis-master ClusterIP 10.0.0.213 6379/TCP 27s + ``` 3. Redis 서버가 파드 안에서 실행되고 있고, 6379번 포트에서 수신하고 있는지 확인한다. - kubectl get pods redis-master-765d459796-258hz --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' + ```shell + # redis-master-765d459796-258hz 를 파드 이름으로 변경한다. + kubectl get pod redis-master-765d459796-258hz --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' + + ``` - 출력은 포트 번호를 보여준다. + 출력은 파드 내 Redis 포트 번호를 보여준다. - 6379 + ``` + 6379 + ``` + (이 TCP 포트는 Redis가 인터넷에 할당된 것이다). ## 파드의 포트를 로컬 포트로 포워딩하기 -1. 쿠버네티스 1.10 버전부터, `kubectl port-forward` 명령어는 파드 이름과 같이 리소스 이름을 사용하여 일치하는 파드를 선택해 포트 포워딩하는 것을 허용한다. +1. `kubectl port-forward` 명령어는 파드 이름과 같이 리소스 이름을 사용하여 일치하는 파드를 선택해 포트 포워딩하는 것을 허용한다. - kubectl port-forward redis-master-765d459796-258hz 7000:6379 + ```shell + # redis-master-765d459796-258hz 를 파드 이름으로 변경한다. + kubectl port-forward redis-master-765d459796-258hz 7000:6379 + ``` 이것은 - kubectl port-forward pods/redis-master-765d459796-258hz 7000:6379 + ```shell + kubectl port-forward pods/redis-master-765d459796-258hz 7000:6379 + ``` 또는 - kubectl port-forward deployment/redis-master 7000:6379 + ```shell + kubectl port-forward deployment/redis-master 7000:6379 + ``` 또는 - kubectl port-forward rs/redis-master 7000:6379 + ```shell + kubectl port-forward rs/redis-master 7000:6379 + ``` 또는 다음과 같다. - kubectl port-forward svc/redis-master 7000:6379 + ```shell + kubectl port-forward svc/redis-master 7000:6379 + ``` 위의 명령어들은 모두 동일하게 동작한다. 이와 유사하게 출력된다. - I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:7000 -> 6379 - I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:7000 -> 6379 + ``` + I0710 14:43:38.274550 3655 portforward.go:225] Forwarding from 127.0.0.1:7000 -> 6379 + I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:7000 -> 6379 + ``` 2. Redis 커맨드라인 인터페이스를 실행한다. - redis-cli -p 7000 + ```shell + redis-cli -p 7000 + ``` 3. Redis 커맨드라인 프롬프트에 `ping` 명령을 입력한다. - 127.0.0.1:7000>ping + ```shell + ping + ``` + + 성공적인 핑 요청을 반환한다. - 성공적인 핑 요청은 PONG을 반환한다. + ``` + PONG + ``` {{% /capture %}} @@ -136,11 +189,12 @@ weight: 40 이 연결로 로컬 워크스테이션에서 파드 안에서 실행 중인 데이터베이스를 디버깅하는데 사용할 수 있다. -{{< warning >}} -알려진 제한사항으로 인해, 오늘날 포트 포워딩은 TCP 프로토콜에서만 작동한다. UDP 프로토콜에 대한 지원은 +{{< note >}} +`kubectl port-forward` 는 TCP 포트에서만 구현된다. +UDP 프로토콜에 대한 지원은 [이슈 47862](https://github.com/kubernetes/kubernetes/issues/47862) 에서 추적되고 있다. -{{< /warning >}} +{{< /note >}} {{% /capture %}} diff --git a/content/ko/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/ko/docs/tasks/access-application-cluster/web-ui-dashboard.md index f9d87d90953fd..0f55dd30c25ae 100644 --- a/content/ko/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/content/ko/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -109,7 +109,7 @@ track=stable - **이미지 풀(Pull) 시크릿**: 특정 도커 컨테이너 이미지가 프라이빗한 경우, [풀(Pull) 시크릿](/docs/concepts/configuration/secret/) 증명을 요구한다. - 대시보드는 가능한 모든 시크릿을 드롭다운 리스트로 제공하며, 새로운 시크릿을 생성 할 수 있도록 한다. 시크릿 이름은 예를 들어 `new.image-pull.secret` 과 같이 DNS 도메인 이름 구문으로 따르기로 한다. 시크릿 내용은 base64 인코딩 방식이며, [`.dockercfg`](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) 파일로 정의된다. 시크릿 이름은 최대 253 문자를 포함할 수 있다. + 대시보드는 가능한 모든 시크릿을 드롭다운 리스트로 제공하며, 새로운 시크릿을 생성 할 수 있도록 한다. 
시크릿 이름은 예를 들어 `new.image-pull.secret` 과 같이 DNS 도메인 이름 구문으로 따르기로 한다. 시크릿 내용은 base64 인코딩 방식이며, [`.dockercfg`](/ko/docs/concepts/containers/images/#파드에-imagepullsecrets-명시) 파일로 정의된다. 시크릿 이름은 최대 253 문자를 포함할 수 있다. 이미지 풀(Pull) 시크릿의 생성이 성공한 경우, 기본으로 선택된다. 만약 생성에 실패하면, 시크릿은 허용되지 않는다. diff --git a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 5672b7809b0fa..6b421f5aed9e7 100644 --- a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -81,7 +81,7 @@ service/php-apache created 간단히 얘기하면, HPA는 (디플로이먼트를 통한) 평균 CPU 사용량을 50%로 유지하기 위하여 레플리카의 개수를 늘리고 줄인다. (kubectl run으로 각 파드는 200 밀리코어까지 요청할 수 있고, 따라서 여기서 말하는 평균 CPU 사용은 100 밀리코어를 말한다). -이에 대한 자세한 알고리즘은 [여기](/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details)를 참고하기 바란다. +이에 대한 자세한 알고리즘은 [여기](/ko/docs/tasks/run-application/horizontal-pod-autoscale/#알고리즘-세부-정보)를 참고하기 바란다. ```shell kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10 @@ -367,8 +367,8 @@ status: type: Object object: metric: - name: `http_requests` - selector: `verb=GET` + name: http_requests + selector: {matchLabels: {verb: GET}} ``` 이 셀렉터는 쿠버네티스의 레이블 셀렉터와 동일한 문법이다. diff --git a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md index 4043ef3a1393a..136ff556e917d 100644 --- a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -67,7 +67,7 @@ Horizontal Pod Autoscaler는 컨트롤러 HorizontalPodAutoscaler는 보통 일련의 API 집합(`metrics.k8s.io`, `custom.metrics.k8s.io`, `external.metrics.k8s.io`)에서 메트릭을 가져온다. `metrics.k8s.io` API는 대개 별도로 시작해야 하는 메트릭-서버에 의해 제공된다. 가이드는 -[메트릭-서버](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server)를 +[메트릭-서버](/ko/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#메트릭-서버)를 참조한다. HorizontalPodAutoscaler는 힙스터(Heapster)에서 직접 메트릭을 가져올 수도 있다. {{< note >}} @@ -158,11 +158,15 @@ HorizontalPodAutoscaler에 여러 메트릭이 지정된 경우, 이 계산은 현재 값보다 높은 `desiredReplicas` 을 제공하는 경우 HPA가 여전히 확장할 수 있음을 의미한다. -마지막으로, HPA가 목표를 스케일하기 직전에 스케일 권장 사항이 기록된다. -컨트롤러는 구성 가능한 창(window) 내에서 가장 높은 권장 사항을 선택하도록 해당 창 내의 -모든 권장 사항을 고려한다. 이 값은 `--horizontal-pod-autoscaler-downscale-stabilization` 플래그를 사용하여 설정할 수 있고, 기본 값은 5분이다. -즉, 스케일 다운이 점진적으로 발생하여 급격히 변동하는 -메트릭 값의 영향을 완만하게 한다. +마지막으로, HPA가 목표를 스케일하기 직전에 스케일 권장 사항이 +기록된다. 컨트롤러는 구성 가능한 창(window) 내에서 가장 높은 권장 +사항을 선택하도록 해당 창 내의 모든 권장 사항을 고려한다. 이 값은 +`--horizontal-pod-autoscaler-downscale-stabilization` 플래그 또는 HPA 오브젝트 +동작 `behavior.scaleDown.stabilizationWindowSeconds` ([구성가능한 +스케일링 동작 지원](#구성가능한-스케일링-동작-지원)을 본다)을 +사용하여 설정할 수 있고, 기본 값은 5분이다. +즉, 스케일 다운이 점진적으로 발생하여 급격히 변동하는 메트릭 값의 +영향을 완만하게 한다. ## API 오브젝트 @@ -174,6 +178,8 @@ CPU에 대한 오토스케일링 지원만 포함하는 안정된 버전은 `autoscaling/v2beta2`에서 확인할 수 있다. `autoscaling/v2beta2`에서 소개된 새로운 필드는 `autoscaling/v1`로 작업할 때 어노테이션으로 보존된다. +HorizontalPodAutoscaler API 오브젝트 생성시 지정된 이름이 유효한 +[DNS 서브도메인 이름](/ko/docs/concepts/overview/working-with-objects/names/#dns-서브도메인-이름들)인지 확인해야 한다. API 오브젝트에 대한 자세한 내용은 [HorizontalPodAutoscaler 오브젝트](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object)에서 찾을 수 있다. @@ -228,6 +234,11 @@ v1.12부터는 새로운 알고리즘 업데이트가 업스케일 지연에 대 있다. 
{{< /note >}} +v1.17 부터 v2beta2 API 필드에서 `behavior.scaleDown.stabilizationWindowSeconds` +를 설정하여 다운스케일 안정화 창을 HPA별로 설정할 수 있다. +[구성가능한 스케일링 +동작 지원](#구성가능한-스케일링-동작-지원)을 본다. + ## 멀티 메트릭을 위한 지원 Kubernetes 1.6은 멀티 메트릭을 기반으로 스케일링을 지원한다. `autoscaling/v2beta2` API @@ -275,8 +286,156 @@ API에 접속하려면 클러스터 관리자는 다음을 확인해야 한다. [custom.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md), [external.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md)를 참조한다. -어떻게 사용하는지에 대한 예시는 [커스텀 메트릭 사용하는 작업 과정](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics)과 -[외부 메트릭스 사용하는 작업 과정](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects)을 참조한다. +어떻게 사용하는지에 대한 예시는 [커스텀 메트릭 사용하는 작업 과정](/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#다양한-메트릭-및-사용자-정의-메트릭을-기초로한-오토스케일링)과 +[외부 메트릭스 사용하는 작업 과정](/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#쿠버네티스-오브젝트와-관련이-없는-메트릭을-기초로한-오토스케일링)을 참조한다. + +## 구성가능한 스케일링 동작 지원 + +[v1.17](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md) +부터 `v2beta2` API는 HPA `behavior` 필드를 통해 +스케일링 동작을 구성할 수 있다. +동작은 `behavior` 필드 아래의 `scaleUp` 또는 `scaleDown` +섹션에서 스케일링 업과 다운을 위해 별도로 지정된다. 안정화 윈도우는 +스케일링 대상에서 레플리카 수의 플래핑(flapping)을 방지하는 +양방향에 대해 지정할 수 있다. 마찬가지로 스케일링 정책을 지정하면 +스케일링 중 레플리카 변경 속도를 제어할 수 있다. + +### 스케일링 정책 + +스펙의 `behavior` 섹션에 하나 이상의 스케일링 폴리시를 지정할 수 있다. +폴리시가 여러 개 지정된 경우 가장 많은 양의 변경을 +허용하는 정책이 기본적으로 선택된 폴리시이다. 다음 예시는 스케일 다운 중 이 +동작을 보여준다. + +```yaml +behavior: + scaleDown: + policies: + - type: Pods + value: 4 + periodSeconds: 60 + - type: Percent + value: 10 + periodSeconds: 60 +``` + +파드 수가 40개를 초과하면 두 번째 폴리시가 스케일링 다운에 사용된다. +예를 들어 80개의 레플리카가 있고 대상을 10개의 레플리카로 축소해야 하는 +경우 첫 번째 단계에서 8개의 레플리카가 스케일 다운 된다. 레플리카의 수가 72개일 때 +다음 반복에서 파드의 10%는 7.2 이지만, 숫자는 8로 올림된다. 오토스케일러 컨트롤러의 +각 루프에서 변경될 파드의 수는 현재 레플리카의 수에 따라 재계산된다. 레플리카의 수가 40 +미만으로 떨어지면 첫 번째 폴리시 _(파드들)_ 가 적용되고 한번에 +4개의 레플리카가 줄어든다. + +`periodSeconds` 는 폴리시가 참(true)으로 유지되어야 하는 기간을 나타낸다. +첫 번째 정책은 1분 내에 최대 4개의 레플리카를 스케일 다운할 수 있도록 허용한다. +두 번째 정책은 현재 레플리카의 최대 10%를 1분 내에 스케일 다운할 수 있도록 허용한다. + +확장 방향에 대해 `selectPolicy` 필드를 확인하여 폴리시 선택을 변경할 수 있다. +레플리카의 수를 최소로 변경할 수 있는 폴리시를 선택하는 `최소(Min)`로 값을 설정한다. +값을 `Disabled` 로 설정하면 해당 방향으로 스케일링이 완전히 +비활성화 된다. + +### 안정화 윈도우 + +안정화 윈도우는 스케일링에 사용되는 메트릭이 계속 변동할 때 레플리카의 플래핑을 +다시 제한하기 위해 사용된다. 안정화 윈도우는 스케일링을 방지하기 위해 과거부터 +계산된 의도한 상태를 고려하는 오토스케일링 알고리즘에 의해 사용된다. +다음의 예시에서 `scaleDown` 에 대해 안정화 윈도우가 지정되어있다. + +```yaml +scaleDown: + stabilizationWindowSeconds: 300 +``` + +메트릭이 대상을 축소해야하는 것을 나타내는 경우 알고리즘은 +이전에 계산된 의도한 상태를 살펴보고 지정된 간격의 최고 값을 사용한다. +위의 예시에서 지난 5분 동안 모든 의도한 상태가 고려된다. + +### 기본 동작 + +사용자 지정 스케일링을 사용하려면 일부 필드를 지정해야 한다. 사용자 정의해야 +하는 값만 지정할 수 있다. 이러한 사용자 지정 값은 기본값과 병합된다. 기본값은 HPA +알고리즘의 기존 동작과 일치한다. + +```yaml +behavior: + scaleDown: + stabilizationWindowSeconds: 300 + policies: + - type: Percent + value: 100 + periodSeconds: 15 + scaleUp: + stabilizationWindowSeconds: 0 + policies: + - type: Percent + value: 100 + periodSeconds: 15 + - type: Pods + value: 4 + periodSeconds: 15 + selectPolicy: Max +``` +안정화 윈도우의 스케일링 다운의 경우 _300_ 초(또는 제공된 +경우`--horizontal-pod-autoscaler-downscale-stabilization` 플래그의 값)이다. 
스케일링 다운에서는 현재 +실행 중인 레플리카의 100%를 제거할 수 있는 단일 정책만 있으며, 이는 스케일링 +대상을 최소 허용 레플리카로 축소할 수 있음을 의미한다. +스케일링 업에는 안정화 윈도우가 없다. 메트릭이 대상을 스케일 업해야 한다고 표시된다면 대상이 즉시 스케일 업된다. +두 가지 폴리시가 있다. HPA가 정상 상태에 도달 할 때까지 15초 마다 +4개의 파드 또는 현재 실행 중인 레플리카의 100% 가 추가된다. + +### 예시: 다운스케일 안정화 윈도우 변경 + +사용자 지정 다운스케일 안정화 윈도우를 1분 동안 제공하기 위해 +다음 동작이 HPA에 추가된다. + +```yaml +behavior: + scaleDown: + stabilizationWindowSeconds: 60 +``` + +### 예시: 스케일 다운 비율 제한 + +HPA에 의해 파드가 제거되는 속도를 분당 10%로 제한하기 위해 +다음 동작이 HPA에 추가된다. + +```yaml +behavior: + scaleDown: + policies: + - type: Percent + value: 10 + periodSeconds: 60 +``` + +마지막으로 5개의 파드를 드롭하기 위해 다른 폴리시를 추가하고, 최소 선택 +전략을 추가할 수 있다. + +```yaml +behavior: + scaleDown: + policies: + - type: Percent + value: 10 + periodSeconds: 60 + - type: Pods + value: 5 + periodSeconds: 60 + selectPolicy: Max +``` + +### 예시: 스케일 다운 비활성화 + +`selectPolicy` 의 `Disabled` 값은 주어진 방향으로의 스케일링을 끈다. +따라서 다운 스케일링을 방지하기 위해 다음 폴리시가 사용된다. + +```yaml +behavior: + scaleDown: + selectPolicy: Disabled +``` {{% /capture %}} @@ -286,4 +445,4 @@ API에 접속하려면 클러스터 관리자는 다음을 확인해야 한다. * kubectl 오토스케일 커맨드: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale). * [Horizontal Pod Autoscaler](/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)의 사용 예제. -{{% /capture %}} +{{% /capture %}} \ No newline at end of file diff --git a/content/ko/docs/tasks/tools/install-minikube.md b/content/ko/docs/tasks/tools/install-minikube.md index b50856ff0892d..82a899bba0866 100644 --- a/content/ko/docs/tasks/tools/install-minikube.md +++ b/content/ko/docs/tasks/tools/install-minikube.md @@ -86,6 +86,12 @@ Minikube에서는 동작하지 않는 스냅 패키지 대신 도커용 `.deb` `--vm-driver=none` 을 사용하기 전에 [이 문서](https://minikube.sigs.k8s.io/docs/reference/drivers/none/)를 참조해서 더 자세한 내용을 본다. {{< /caution >}} +Minikube는 도커 드라이브와 비슷한 `vm-driver=podman` 도 지원한다. 슈퍼사용자 권한(root 사용자)으로 실행되는 Podman은 컨테이너가 시스템에서 사용 가능한 모든 기능에 완전히 접근할 수 있는 가장 좋은 방법이다. + +{{< caution >}} +일반 사용자 계정은 컨테이너를 실행하는 데 필요한 모든 운영 체제 기능에 완전히 접근할 수 없기에 `podman` 드라이버는 컨테이너를 root로 실행해야 한다. +{{< /caution >}} + ### 패키지를 이용하여 Minikube 설치 Minikube를 위한 *실험적인* 패키지가 있다. diff --git a/content/ko/docs/tutorials/_index.md b/content/ko/docs/tutorials/_index.md index e5af6c2afdad9..c279a84c1b6c3 100644 --- a/content/ko/docs/tutorials/_index.md +++ b/content/ko/docs/tutorials/_index.md @@ -22,8 +22,6 @@ content_template: templates/concept * [쿠버네티스 기초](/ko/docs/tutorials/kubernetes-basics/)는 쿠버네티스 시스템을 이해하는데 도움이 되고 기초적인 쿠버네티스 기능을 일부 사용해 볼 수 있는 심도있는 대화형 튜토리얼이다. -* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) - * [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#) * [Hello Minikube](/ko/docs/tutorials/hello-minikube/) diff --git a/content/ko/docs/tutorials/hello-minikube.md b/content/ko/docs/tutorials/hello-minikube.md index b4209de162b85..149a2d014996f 100644 --- a/content/ko/docs/tutorials/hello-minikube.md +++ b/content/ko/docs/tutorials/hello-minikube.md @@ -10,12 +10,12 @@ menu:

Ready to get your hands dirty? Build a simple Kubernetes cluster that runs "Hello World" for Node.js.

card: name: tutorials - weight: 10 + weight: 10 --- {{% capture overview %}} -이 튜토리얼에서는 [Minikube](/ko/docs/setup/learning-environment/minikube)와 Katacoda를 이용하여 +이 튜토리얼에서는 [Minikube](/ko/docs/setup/learning-environment/minikube)와 Katacoda를 이용하여 쿠버네티스에서 Node.js 로 작성된 간단한 Hello World 애플리케이션을 어떻게 실행하는지 살펴본다. Katacode는 무료로 브라우저에서 쿠버네티스 환경을 제공한다. @@ -69,7 +69,7 @@ Katacode는 무료로 브라우저에서 쿠버네티스 환경을 제공한다. 쿠버네티스 [*파드*](/ko/docs/concepts/workloads/pods/pod/)는 관리와 네트워킹 목적으로 함께 묶여 있는 하나 이상의 컨테이너 그룹이다. -이 튜토리얼의 파드에는 단 하나의 컨테이너만 있다. 쿠버네티스 +이 튜토리얼의 파드에는 단 하나의 컨테이너만 있다. 쿠버네티스 [*디플로이먼트*](/ko/docs/concepts/workloads/controllers/deployment/)는 파드의 헬스를 검사해서 파드의 컨테이너가 종료되었다면 재시작해준다. 파드의 생성 및 스케일링을 관리하는 방법으로 디플로이먼트를 권장한다. @@ -281,4 +281,4 @@ minikube delete * [애플리케이션 배포](/docs/tasks/run-application/run-stateless-application-deployment/)에 대해서 더 배워 본다. * [서비스 오브젝트](/ko/docs/concepts/services-networking/service/)에 대해서 더 배워 본다. -{{% /capture %}} +{{% /capture %}} \ No newline at end of file diff --git a/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html index 95c22577fd096..6f0fb013d6eb8 100644 --- a/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html +++ b/content/ko/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html @@ -90,7 +90,7 @@

클러스터 다이어그램

-

마스터는 클러스터를 관리하고 노드는 구동되는 애플리케이션을 수용하는데 사용된다.

+

마스터는 클러스터를 관리하고, 노드는 실행 중인 애플리케이션을 호스팅하는 데 사용된다.
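다음은 마스터(컨트롤 플레인)와 노드 구성을 직접 확인해 볼 수 있는 최소 예시이다. 출력되는 노드의 이름과 개수는 각자의 클러스터 환경에 따라 다르다고 가정한다.

```shell
# 마스터(컨트롤 플레인)의 주소와 클러스터 정보를 출력한다.
kubectl cluster-info

# 실행 중인 애플리케이션을 호스팅하는 노드 목록을 조회한다.
kubectl get nodes
```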

diff --git a/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html index b205265cb4f49..34a69387dc0bc 100644 --- a/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html +++ b/content/ko/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html @@ -17,6 +17,14 @@
+
+
+

+ 파드는 쿠버네티스 애플리케이션의 기본 실행 단위이다. 각 파드는 클러스터에서 실행중인 워크로드의 일부를 나타낸다. 파드에 대해 더 자세히 알아본다. +

+
+
+
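예를 들어, 디플로이먼트를 생성하면 파드가 함께 생성되는 것을 다음과 같이 확인해 볼 수 있다. 아래의 kubernetes-bootcamp 이미지는 이 튜토리얼에서 사용하는 예시 이미지를 가정한 것으로, 다른 컨테이너 이미지로 대체해도 된다.

```shell
# 디플로이먼트를 생성하면 파드가 함께 생성된다.
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1

# 생성된 파드를 조회한다.
kubectl get pods
```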
diff --git a/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html b/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html index f8cf20216d82b..aed0258cf697e 100644 --- a/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html +++ b/content/ko/docs/tutorials/kubernetes-basics/expose/expose-intro.html @@ -79,13 +79,8 @@

서비스와 레이블

  • 임베디드된 버전 태그들
  • 태그들을 이용하는 객체들에 대한 분류
  • - -
    -
    -
    -

    여러분은 kubectl 명령에
    --expose 옵션을 사용함으로써 디플로이먼트 생성과 동일 시점에 서비스를 생성할 수 있다(아래 예시를 본다).

    -
    +
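다음은 디플로이먼트를 서비스로 노출하는 최소 예시이다. 앞의 모듈에서 생성한 kubernetes-bootcamp 디플로이먼트가 이미 존재한다고 가정한다.

```shell
# 디플로이먼트를 NodePort 타입의 서비스로 노출한다.
kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080

# 생성된 서비스와 할당된 NodePort를 확인한다.
kubectl get services

# 또는 --expose 옵션으로 생성과 동일 시점에 서비스를 만들 수 있다.
# (kubectl 버전에 따라 run 명령은 파드 또는 디플로이먼트를 생성한다.)
kubectl run nginx --image=nginx --port=80 --expose
```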

    diff --git a/content/ko/docs/tutorials/online-training/_index.md b/content/ko/docs/tutorials/online-training/_index.md deleted file mode 100755 index 7d3447fd4eef9..0000000000000 --- a/content/ko/docs/tutorials/online-training/_index.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -title: "온라인 트레이닝 코스" -weight: 20 ---- - diff --git a/content/ko/docs/tutorials/online-training/overview.md b/content/ko/docs/tutorials/online-training/overview.md deleted file mode 100644 index 44f1c69ddc358..0000000000000 --- a/content/ko/docs/tutorials/online-training/overview.md +++ /dev/null @@ -1,71 +0,0 @@ ---- -title: 쿠버네티스 온라인 트레이닝 개요 -content_template: templates/concept ---- - -{{% capture overview %}} - -이 페이지에서는 쿠버네티스 온라인 트레이닝을 제공하는 사이트를 소개한다. - -{{% /capture %}} - -{{% capture body %}} - -* [AIOps Essentials (Autoscaling Kubernetes with Prometheus Metrics) with Hands-On Labs (Linux Academy)](https://linuxacademy.com/devops/training/course/name/using-machine-learning-to-scale-kubernetes-clusters) - -* [Amazon EKS Deep Dive with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/amazon-web-services/training/course/name/amazon-eks-deep-dive) - -* [Cloud Native Certified Kubernetes Administrator (CKA) with Hands-On Labs & Practice Exams (Linux Academy)](https://linuxacademy.com/linux/training/course/name/cloud-native-certified-kubernetes-administrator-cka) - -* [Certified Kubernetes Administrator (CKA) Preparation Course (CloudYuga)](https://cloudyuga.guru/courses/cka-online-self-paced) - -* [Certified Kubernetes Administrator Preparation Course with Practice Tests (KodeKloud)](https://kodekloud.com/p/certified-kubernetes-administrator-with-practice-tests) - -* [Certified Kubernetes Application Developer (CKAD) with Hands-On Labs & Practice Exams (Linux Academy)] (https://linuxacademy.com/containers/training/course/name/certified-kubernetes-application-developer-ckad/) - -* [Certified Kubernetes Application Developer (CKAD) Preparation Course (CloudYuga)](https://cloudyuga.guru/courses/ckad-online-self-paced) - -* [Certified Kubernetes Application Developer Preparation Course with Practice Tests (KodeKloud)](https://kodekloud.com/p/kubernetes-certification-course) - -* [Getting Started with Google Kubernetes Engine (Coursera)](https://www.coursera.org/learn/google-kubernetes-engine) - -* [Getting Started with Kubernetes (Pluralsight)](https://www.pluralsight.com/courses/getting-started-kubernetes) - -* [Getting Started with Kubernetes Clusters on OCI Oracle Kubernetes Engine (OKE) (Learning Library)](https://apexapps.oracle.com/pls/apex/f?p=44785:50:0:::50:P50_EVENT_ID,P50_COURSE_ID:5935,256) - -* [Google Kubernetes Engine Deep Dive (Linux Academy)] (https://linuxacademy.com/google-cloud-platform/training/course/name/google-kubernetes-engine-deep-dive) - -* [Helm Deep Dive with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/helm-deep-dive-part-1) - -* [Hands-on Introduction to Kubernetes (Instruqt)](https://play.instruqt.com/public/topics/getting-started-with-kubernetes) - -* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x) - -* [Kubernetes Essentials with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-essentials) - -* [Kubernetes for the Absolute Beginners with Hands-on Labs (KodeKloud)](https://kodekloud.com/p/kubernetes-for-the-absolute-beginners-hands-on) - -* [Kubernetes Fundamentals (LFS258) (The Linux 
Foundation)](https://training.linuxfoundation.org/training/kubernetes-fundamentals/) - -* [Kubernetes Quick Start with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-quick-start) - -* [Kubernetes the Hard Way with Hands-On Labs (Linux Academy)](https://linuxacademy.com/linux/training/course/name/kubernetes-the-hard-way) - -* [Kubernetes Security with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-security) - -* [Launch Your First OpenShift Operator with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/containers/training/course/name/red-hat-open-shift) - -* [Learn Kubernetes by Doing - 100% Hands-On Experience (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/learn-kubernetes-by-doing) - -* [Learn Kubernetes using Interactive Hands-on Scenarios (Katacoda)](https://www.katacoda.com/courses/kubernetes/) - -* [Microservice Applications in Kubernetes - 100% Hands-On Experience (Linux Academy)] (https://linuxacademy.com/devops/training/course/name/learn-microservices-by-doing) - -* [Monitoring Kubernetes With Prometheus with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-and-prometheus) - -* [Service Mesh with Istio with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/service-mesh-with-istio-part-1) - -* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) - -* [Self-paced Kubernetes online course (Learnk8s Academy)](https://learnk8s.io/academy) -{{% /capture %}} diff --git a/content/ko/docs/tutorials/services/source-ip.md b/content/ko/docs/tutorials/services/source-ip.md index 23ba647830651..11af03664d29c 100644 --- a/content/ko/docs/tutorials/services/source-ip.md +++ b/content/ko/docs/tutorials/services/source-ip.md @@ -1,6 +1,7 @@ --- title: 소스 IP 주소 이용하기 content_template: templates/tutorial +min-kubernetes-server-version: v1.5 --- {{% capture overview %}} @@ -14,26 +15,38 @@ content_template: templates/tutorial {{% capture prerequisites %}} -{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - -## 용어 +### 용어 이 문서는 다음 용어를 사용한다. -* [NAT](https://en.wikipedia.org/wiki/Network_address_translation): 네트워크 주소 변환 -* [소스 NAT](https://en.wikipedia.org/wiki/Network_address_translation#SNAT): 패킷 상의 소스 IP 주소를 변경함, 보통 노드의 IP 주소 -* [대상 NAT](https://en.wikipedia.org/wiki/Network_address_translation#DNAT): 패킷 상의 대상 IP 주소를 변경함, 보통 파드의 IP 주소 -* [VIP](/ko/docs/concepts/services-networking/service/#가상-ip와-서비스-프록시): 가상 IP 주소, 모든 쿠버네티스 서비스에 할당된 것 같은 -* [Kube-proxy](/ko/docs/concepts/services-networking/service/#가상-ip와-서비스-프록시): 네트워크 데몬으로 모든 노드에서 서비스 VIP 관리를 관리한다. +{{< comment >}} +이 섹션을 현지화하는 경우 대상 지역에 대한 위키피디아 +페이지로 연결한다. +{{< /comment >}} + +[NAT](https://en.wikipedia.org/wiki/Network_address_translation) +: 네트워크 주소 변환 + +[소스 NAT](https://en.wikipedia.org/wiki/Network_address_translation#SNAT) +: 패킷 상의 소스 IP 주소를 변경함, 보통 노드의 IP 주소 + +[대상 NAT](https://en.wikipedia.org/wiki/Network_address_translation#DNAT) +: 패킷 상의 대상 IP 주소를 변경함, 보통 파드의 IP 주소 + +[VIP](/ko/docs/concepts/services-networking/service/#가상-ip와-서비스-프록시) +: 가상 IP 주소, 모든 쿠버네티스 서비스에 할당된 것 같은 +[Kube-proxy](/ko/docs/concepts/services-networking/service/#가상-ip와-서비스-프록시) +: 네트워크 데몬으로 모든 노드에서 서비스 VIP 관리를 관리한다. -## 전제 조건 +### 전제 조건 + +{{< include "task-tutorial-prereqs.md" >}} -이 문서의 예시를 실행하기 위해서 쿠버네티스 1.5 이상의 동작하는 클러스터가 필요하다. 
이 예시는 HTTP 헤더로 수신한 요청의 소스 IP 주소를 회신하는 작은 nginx 웹 서버를 이용한다. 다음과 같이 생성할 수 있다. -```console +```shell kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4 ``` 출력은 다음과 같다. @@ -54,12 +67,13 @@ deployment.apps/source-ip-app created {{% capture lessoncontent %}} -## Type=ClusterIP인 서비스에서 소스 IP +## `Type=ClusterIP` 인 서비스에서 소스 IP -쿠버네티스 1.2부터 기본으로 제공하는 -[iptables 모드](/ko/docs/concepts/services-networking/service/#proxy-mode-iptables)로 운영하는 경우 -클러스터 내에서 클러스터 IP로 패킷을 보내면 소스 NAT를 통과하지 않는다. -Kube-proxy는 이 모드를 `proxyMode` 엔드포인트를 통해 노출한다. +[iptables 모드](/ko/docs/concepts/services-networking/service/#proxy-mode-iptables) +(기본값)에서 kube-proxy를 운영하는 경우 클러스터 내에서 +클러스터IP로 패킷을 보내면 +소스 NAT를 통과하지 않는다. kube-proxy가 실행중인 노드에서 +`http://localhost:10249/proxyMode` 를 입력해서 kube-proxy 모드를 조회할 수 있다. ```console kubectl get nodes @@ -71,9 +85,11 @@ kubernetes-node-6jst Ready 2h v1.13.0 kubernetes-node-cx31 Ready 2h v1.13.0 kubernetes-node-jj1t Ready 2h v1.13.0 ``` -한 노드의 프록시 모드를 확인한다. -```console -kubernetes-node-6jst $ curl localhost:10249/proxyMode + +한 노드의 프록시 모드를 확인한다. (kube-proxy는 포트 10249에서 수신대기한다.) +```shell +# 질의 할 노드의 쉘에서 이것을 실행한다. +curl localhost:10249/proxyMode ``` 출력은 다음과 같다. ``` @@ -82,23 +98,40 @@ iptables 소스 IP 애플리케이션을 통해 서비스를 생성하여 소스 IP 주소 보존 여부를 테스트할 수 있다. -```console -$ kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080 +```shell +kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080 +``` +출력은 다음과 같다. +``` service/clusterip exposed - -$ kubectl get svc clusterip +``` +```shell +kubectl get svc clusterip +``` +출력은 다음과 같다. +``` NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE clusterip ClusterIP 10.0.170.92 80/TCP 51s ``` 그리고 동일한 클러스터의 파드에서 `클러스터IP`를 치면: -```console -$ kubectl run busybox -it --image=busybox --restart=Never --rm +```shell +kubectl run busybox -it --image=busybox --restart=Never --rm +``` +출력은 다음과 같다. +``` Waiting for pod default/busybox to be running, status is Pending, pod ready: false If you don't see a command prompt, try pressing enter. -# ip addr +``` +그런 다음 해당 파드 내에서 명령을 실행할 수 있다. + +```shell +# "kubectl run" 으로 터미널 내에서 이것을 실행한다. +ip addr +``` +``` 1: lo: mtu 65536 qdisc noqueue link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo @@ -111,26 +144,38 @@ If you don't see a command prompt, try pressing enter. valid_lft forever preferred_lft forever inet6 fe80::188a:84ff:feb0:26a5/64 scope link valid_lft forever preferred_lft forever +``` -# wget -qO - 10.0.170.92 +그런 다음 `wget` 을 사용해서 로컬 웹 서버에 쿼리한다. +```shell +# 10.0.170.92를 파드의 IPv4 주소로 변경한다. +wget -qO - 10.0.170.92 +``` +``` CLIENT VALUES: client_address=10.244.3.8 command=GET ... ``` -client_address는 클라이언트 파드와 서버 파드가 같은 노드 또는 다른 노드에 있는지 여부에 관계없이 항상 클라이언트 파드의 IP 주소이다. +`client_address` 는 클라이언트 파드와 서버 파드가 같은 노드 또는 다른 노드에 있는지 여부에 관계없이 항상 클라이언트 파드의 IP 주소이다. -## Type=NodePort인 서비스에서 소스 IP +## `Type=NodePort` 인 서비스에서 소스 IP -쿠버네티스 1.5부터 [Type=NodePort](/ko/docs/concepts/services-networking/service/#nodeport)인 서비스로 보내진 패킷은 +[`Type=NodePort`](/ko/docs/concepts/services-networking/service/#nodeport)인 +서비스로 보내진 패킷은 소스 NAT가 기본으로 적용된다. `NodePort` 서비스를 생성하여 이것을 테스트할 수 있다. -```console -$ kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort +```shell +kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort +``` +출력은 다음과 같다. 
+``` service/nodeport exposed +``` -$ NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nodeport) -$ NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="IPAddress")].address }') +```shell +NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nodeport) +NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="IPAddress")].address }') ``` 클라우드 공급자 상에서 실행한다면, @@ -138,8 +183,11 @@ $ NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type= 이제 위에 노드 포트로 할당받은 포트를 통해 클러스터 외부에서 서비스에 도달할 수 있다. -```console -$ for node in $NODES; do curl -s $node:$NODEPORT | grep -i client_address; done +```shell +for node in $NODES; do curl -s $node:$NODEPORT | grep -i client_address; done +``` +출력은 다음과 유사하다. +``` client_address=10.180.1.1 client_address=10.240.0.5 client_address=10.240.0.3 @@ -169,26 +217,33 @@ client_address=10.240.0.3 ``` -이를 피하기 위해 쿠버네티스는 클라이언트 소스 IP 주소를 보존하는 기능이 있다. -[(기능별 가용성은 여기에)](/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip). -`service.spec.externalTrafficPolicy`을 `Local`로 하면 -오직 로컬 엔드포인트로만 프록시 요청하고 다른 노드로 트래픽 전달하지 않으므로, -원본 소스 IP 주소를 보존한다. -만약 로컬 엔드 포인트가 없다면, 그 노드로 보내진 패킷은 버려지므로 +이를 피하기 위해 쿠버네티스는 +[클라이언트 소스 IP 주소를 보존](/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)하는 기능이 있다. +`service.spec.externalTrafficPolicy` 의 값을 `Local` 로 하면 +오직 로컬 엔드포인트로만 프록시 요청하고 +다른 노드로 트래픽 전달하지 않는다. 이 방법은 원본 +소스 IP 주소를 보존한다. 만약 로컬 엔드 포인트가 없다면, +그 노드로 보내진 패킷은 버려지므로 패킷 처리 규칙에서 정확한 소스 IP 임을 신뢰할 수 있으므로, 패킷을 엔드포인트까지 전달할 수 있다. 다음과 같이 `service.spec.externalTrafficPolicy` 필드를 설정하자. -```console -$ kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}' +```shell +kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}' +``` +출력은 다음과 같다. +``` service/nodeport patched ``` 이제 다시 테스트를 실행해보자. -```console -$ for node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done +```shell +for node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done +``` +출력은 다음과 유사하다. +``` client_address=104.132.1.79 ``` @@ -202,7 +257,6 @@ client_address=104.132.1.79 * 클라이언트는 패킷을 엔드포인트를 가진 `node1:nodePort` 보낸다. * node1은 패킷을 올바른 소스 IP 주소로 엔드포인트로 라우팅 한다. - 시각적으로 ``` @@ -219,10 +273,11 @@ client_address=104.132.1.79 -## Type=LoadBalancer인 서비스에서 소스 IP +## `Type=LoadBalancer` 인 서비스에서 소스 IP -쿠버네티스 1.5 부터 [Type=LoadBalancer](/ko/docs/concepts/services-networking/service/#loadbalancer)인 서비스로 -보낸 패킷은 소스 NAT를 기본으로 하는데, `Ready` 상태로 모든 스케줄된 모든 쿠버네티스 노드는 +[`Type=LoadBalancer`](/ko/docs/concepts/services-networking/service/#loadbalancer)인 +서비스로 보낸 패킷은 소스 NAT를 기본으로 하는데, `Ready` 상태로 +모든 스케줄된 모든 쿠버네티스 노드는 로드 밸런싱 트래픽에 적합하다. 따라서 엔드포인트가 없는 노드에 패킷이 도착하면 시스템은 엔드포인트를 *포함한* 노드에 프록시를 수행하고 패킷 상에서 노드의 IP 주소로 소스 IP 주소를 변경한다 @@ -230,15 +285,31 @@ client_address=104.132.1.79 로드밸런서를 통해 source-ip-app을 노출하여 테스트할 수 있다. -```console -$ kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer +```shell +kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer +``` +출력은 다음과 같다. +``` service/loadbalancer exposed +``` -$ kubectl get svc loadbalancer +서비스의 IP 주소를 출력한다. +```console +kubectl get svc loadbalancer +``` +다음과 유사하게 출력된다. 
+``` NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -loadbalancer LoadBalancer 10.0.65.118 104.198.149.140 80/TCP 5m +loadbalancer LoadBalancer 10.0.65.118 203.0.113.140 80/TCP 5m +``` -$ curl 104.198.149.140 +다음으로 이 서비스의 외부 IP에 요청을 전송한다. + +```shell +curl 203.0.113.140 +``` +다음과 유사하게 출력된다. +``` CLIENT VALUES: client_address=10.240.0.5 ... @@ -265,51 +336,74 @@ health check ---> node 1 node 2 <--- health check 이것은 어노테이션을 설정하여 테스트할 수 있다. -```console -$ kubectl patch svc loadbalancer -p '{"spec":{"externalTrafficPolicy":"Local"}}' +```shell +kubectl patch svc loadbalancer -p '{"spec":{"externalTrafficPolicy":"Local"}}' ``` 쿠버네티스에 의해 `service.spec.healthCheckNodePort` 필드가 즉각적으로 할당되는 것을 봐야 한다. -```console -$ kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort +```shell +kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort +``` +출력은 다음과 유사하다. +```yaml healthCheckNodePort: 32122 ``` `service.spec.healthCheckNodePort` 필드는 `/healthz`에서 헬스 체크를 제공하는 모든 노드의 포트를 가르킨다. 이것을 테스트할 수 있다. +```shell +kubectl get pod -o wide -l run=source-ip-app +``` +출력은 다음과 유사하다. ``` -$ kubectl get pod -o wide -l run=source-ip-app NAME READY STATUS RESTARTS AGE IP NODE source-ip-app-826191075-qehz4 1/1 Running 0 20h 10.180.1.136 kubernetes-node-6jst +``` -kubernetes-node-6jst $ curl localhost:32122/healthz +다양한 노드에서 `/healthz` 엔드포인트를 가져오려면 `curl` 을 사용한다. +```shell +# 선택한 노드에서 로컬로 이것을 실행한다. +curl localhost:32122/healthz +``` +``` 1 Service Endpoints found +``` -kubernetes-node-jj1t $ curl localhost:32122/healthz +다른 노드에서는 다른 결과를 얻을 수 있다. +```shell +# 선택한 노드에서 로컬로 이것을 실행한다. +curl localhost:32122/healthz +``` +``` No Service Endpoints Found ``` -마스터에서 실행 중인 서비스 컨트롤러는 필요시에 클라우드 로드밸런서를 할당할 책임이 있다. -또한, 각 노드에 HTTP 헬스 체크를 이 포트와 경로로 할당한다. -헬스체크가 실패한 엔드포인트를 포함하지 않은 2개 노드에서 10초를 기다리고 -로드밸런서 IP 주소로 curl 하자. +{{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}}에서 +실행중인 컨트롤러는 클라우드 로드 밸런서를 할당한다. 또한 같은 컨트롤러는 +각 노드에서 포트/경로(port/path)를 가르키는 HTTP 상태 확인도 할당한다. +엔드포인트가 없는 2개의 노드가 상태 확인에 실패할 +때까지 약 10초간 대기한 다음, +`curl` 을 사용해서 로드밸런서의 IPv4 주소를 쿼리한다. -```console -$ curl 104.198.149.140 +```shell +curl 203.0.113.140 +``` +출력은 다음과 유사하다. +``` CLIENT VALUES: -client_address=104.132.1.79 +client_address=198.51.100.79 ... ``` -__크로스 플랫폼 지원__ +## 크로스-플랫폼 지원 -쿠버네티스 1.5부터 Type=LoadBalancer 서비스를 통한 -소스 IP 주소 보존을 지원하지만, -이는 클라우드 공급자(GCE, Azure)의 하위 집합으로 구현되어 있다. 실행 중인 클라우드 공급자에서 -몇 가지 다른 방법으로 로드밸런서를 요청하자. +일부 클라우드 공급자만 `Type=LoadBalancer` 를 사용하는 +서비스를 통해 소스 IP 보존을 지원한다. +실행 중인 클라우드 공급자에서 몇 가지 다른 방법으로 +로드밸런서를 요청한다. 1. 클라이언트 연결을 종료하고 새 연결을 여는 프록시를 이용한다. 이 경우 소스 IP 주소는 클라이언트 IP 주소가 아니고 @@ -320,34 +414,35 @@ __크로스 플랫폼 지원__ 끝나는 패킷 전달자를 이용한다. 첫 번째 범주의 로드밸런서는 진짜 클라이언트 IP를 통신하기 위해 -HTTP [X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For) 헤더나 -[프록시 프로토콜](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt)같이 로드밸런서와 -백엔드 간에 합의된 프로토콜을 사용해야 한다. +HTTP [Forwarded]](https://tools.ietf.org/html/rfc7239#section-5.2) +또는 [X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For) +헤더 또는 +[proxy protocol](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt)과 +같은 로드밸런서와 백엔드 간에 합의된 프로토콜을 사용해야 한다. 두 번째 범주의 로드밸런서는 서비스의 `service.spec.healthCheckNodePort` 필드의 저장된 포트를 가르키는 -간단한 HTTP 헬스 체크를 생성하여 +HTTP 헬스 체크를 생성하여 위에서 설명한 기능을 활용할 수 있다. {{% /capture %}} {{% capture cleanup %}} -서비스를 삭제하자. +서비스를 삭제한다. -```console -$ kubectl delete svc -l run=source-ip-app +```shell +kubectl delete svc -l run=source-ip-app ``` -디플로이먼트와 리플리카 셋과 파드를 삭제하자. +디플로이먼트, 레플리카셋 그리고 파드를 삭제한다. 
-```console -$ kubectl delete deployment source-ip-app +```shell +kubectl delete deployment source-ip-app ``` {{% /capture %}} {{% capture whatsnext %}} -* [서비스를 통한 애플리케이션 연결하기](/ko/docs/concepts/services-networking/connect-applications-service/)에 대해 더 공부하기 -* [부하분산](/docs/user-guide/load-balancer)에 대해 더 공부하기 +* [서비스를 통한 애플리케이션 연결하기](/ko/docs/concepts/services-networking/connect-applications-service/)에 더 자세히 본다. +* 어떻게 [외부 로드밸런서 생성](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/)하는지 본다. {{% /capture %}} - diff --git a/content/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md index 603a857a587a6..ae4f74e90c1e2 100644 --- a/content/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md +++ b/content/ko/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md @@ -232,7 +232,7 @@ kubectl apply -k ./ {{% capture whatsnext %}} * [인트로스펙션과 디버깅](/docs/tasks/debug-application-cluster/debug-application-introspection/)를 알아보자. -* [잡](/docs/concepts/workloads/controllers/jobs-run-to-completion/)를 알아보자. +* [잡](/ko/docs/concepts/workloads/controllers/jobs-run-to-completion/)를 알아보자. * [포트 포워딩](/ko/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)를 알아보자. * 어떻게 [컨테이너에서 셸을 사용하는지](/docs/tasks/debug-application-cluster/get-shell-running-container/)를 알아보자. diff --git a/content/ko/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md b/content/ko/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md index 00900d5973822..6d6ddf4e1eeae 100644 --- a/content/ko/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md +++ b/content/ko/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md @@ -110,7 +110,7 @@ Elastic Cloud의 Elasticsearch 서비스로 연결한다면 **관리 서비스** 1. ELASTICSEARCH_USERNAME 1. KIBANA_HOST -이 정보를 Elasticsearch 클러스터와 Kibana 호스트에 지정한다. 여기 예시가 있다. +이 정보를 Elasticsearch 클러스터와 Kibana 호스트에 지정한다. 여기 예시(또는 [*이 구성*](https://stackoverflow.com/questions/59892896/how-to-connect-from-minikube-to-elasticsearch-installed-on-host-local-developme/59892897#59892897)을 본다)가 있다. #### `ELASTICSEARCH_HOSTS` 1. Elastic의 Elasticsearch Helm 차트에서 노드 그룹(nodeGroup). diff --git a/content/ko/includes/federated-task-tutorial-prereqs.md b/content/ko/includes/federated-task-tutorial-prereqs.md deleted file mode 100644 index b254407a676a3..0000000000000 --- a/content/ko/includes/federated-task-tutorial-prereqs.md +++ /dev/null @@ -1,5 +0,0 @@ -This guide assumes that you have a running Kubernetes Cluster Federation installation. -If not, then head over to the [federation admin guide](/docs/tutorials/federation/set-up-cluster-federation-kubefed/) to learn how to -bring up a cluster federation (or have your cluster administrator do this for you). -Other tutorials, such as Kelsey Hightower's [Federated Kubernetes Tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation), -might also help you create a Federated Kubernetes cluster. diff --git a/content/ko/training/_index.html b/content/ko/training/_index.html new file mode 100644 index 0000000000000..4e2d914420c76 --- /dev/null +++ b/content/ko/training/_index.html @@ -0,0 +1,108 @@ +--- +title: 교육 +bigheader: 쿠버네티스 교육과 인증 +abstract: 교육 프로그램, 인증 그리고 파트너 +layout: basic +cid: training +class: training +--- + +
    +
    +
    +

    클라우드 네이티브 커리어를 구축하세요

    +

    쿠버네티스는 클라우드 네이티브 무브먼트의 핵심입니다. 리눅스 재단이 제공하는 교육과 인증 프로그램을 통해 커리어에 투자하고, 쿠버네티스를 배우며, 클라우드 네이티브 프로젝트를 성공적으로 수행하세요.

    +
    +
    +
    + +
    +
    +
    +

    edX에서 무료 강좌 수강하기

    +
    +
    +
    +
    +
    + 쿠버네티스 소개
     
    +
    +

    쿠버네티스를 배우고 싶습니까? 컨테이너화된 애플리케이션의 관리를 위한 이 강력한 시스템에 대해 심도 있는 입문 교육을 받으세요.

    +
    + 강좌로 이동하기 +
    +
    +
    +
    +
    + 클라우드 인프라 기술 소개 +
    +

    오픈 소스의 리더인 리눅스 재단으로부터 직접 클라우드를 구축하고 관리하는 기술의 기초를 배우세요.

    +
    + 강좌로 이동하기 +
    +
    +
    +
    +
    + 리눅스 소개 +
    +

    리눅스를 배운 적이 없습니까? 새로 다시 배우기를 원하시나요? 주요 리눅스 배포판에 걸쳐서 그래픽 인터페이스와 커맨드 라인을 모두 사용할 수 있는 유용한 실무 지식을 개발하세요.

    +
    + 강좌로 이동하기 +
    +
    +
    +
    + +
    +
    +
    +

    리눅스 재단과 함께 배우기

    +

    리눅스 재단은 쿠버네티스 애플리케이션 개발과 운영 라이프사이클의 모든 측면에 대해 강사 주도와 자기 주도 학습 과정을 제공합니다.

    +

    + 강좌 보기 +
    +
    +
    + +
    +
    +
    +

    쿠버네티스 공인 자격 획득하기

    +
    +
    +
    +
    + 공인 쿠버네티스 애플리케이션 개발자(Certified Kubernetes Application Developer, CKAD) +
    +

    공인 쿠버네티스 애플리케이션 개발자 시험은 사용자가 쿠버네티스용 클라우드 네이티브 애플리케이션을 디자인하고, 구축하고, 구성하고, 노출할 수 있음을 인증합니다.

    +
    + 인증으로 이동하기 +
    +
    +
    +
    +
    + 공인 쿠버네티스 관리자(Certified Kubernetes Administrator, CKA) +
    +

    공인 쿠버네티스 관리자 프로그램은 CKA가 쿠버네티스 관리자 직무를 수행할 수 있는 기술, 지식과 역량을 갖추고 있음을 보장합니다.

    +
    + 인증으로 이동하기 +
    +
    +
    +
    + +
    +
    +
    +

    쿠버네티스 교육 파트너

    +

    쿠버네티스 교육 파트너 네트워크는 쿠버네티스 및 클라우드 네이티브 프로젝트를 위한 교육 서비스를 제공합니다.

    +
    +
    +
    + + +
    +
    diff --git a/content/pl/training/_index.html b/content/pl/training/_index.html new file mode 100644 index 0000000000000..37c521187f5f6 --- /dev/null +++ b/content/pl/training/_index.html @@ -0,0 +1,108 @@ +--- +title: Szkolenia +bigheader: Kubernetes – szkolenia i certyfikacja +abstract: Programy szkoleniowe, certyfikacja i partnerzy. +layout: basic +cid: training +class: training +--- + +
    +
    +
    +

    Kariera Cloud Native

    +

    Kubernetes stanowi serce całego ruchu cloud native. Korzystając ze szkoleń i certyfikacji oferowanych przez Linux Foundation i naszych partnerów, zainwestujesz w swoją karierę, nauczysz się korzystać z Kubernetesa i sprawisz, że Twoje projekty cloud native osiągną sukces.

    +
    +
    +
    + +
    +
    +
    +

    Darmowe kursy na edX

    +
    +
    +
    +
    +
    + Wprowadzenie do Kubernetesa
     
    +
    +

    Chcesz nauczyć się Kubernetesa? Oto solidne podstawy do poznania tego potężnego systemu zarządzania aplikacjami w kontenerach.

    +
    + Przejdź do kursu +
    +
    +
    +
    +
    + Wprowadzenie do technologii infrastruktur chmurowych +
    +

    Poznaj podstawy budowy i zarządzania technologiami chmurowymi bezpośrednio od Linux Foundation – lidera otwartego oprogramowania.

    +
    + Przejdź do kursu +
    +
    +
    +
    +
    + Wprowadzenie do Linuksa +
    +

    Nigdy nie uczyłeś się Linuksa? Chcesz odświeżyć swoją wiedzę? Zdobądź praktyczną wiedzę, używając interfejsów graficznych oraz linii poleceń w najważniejszych dystrybucjach Linuksa.

    +
    + Przejdź do kursu +
    +
    +
    +
    + +
    +
    +
    +

    Nauka z Linux Foundation

    +

    Linux Foundation oferuje szkolenia prowadzone przez instruktora oraz szkolenia samodzielne, obejmujące wszystkie aspekty rozwijania i zarządzania aplikacjami na Kubernetesie.

    +

    + Zobacz ofertę szkoleń +
    +
    +
    + +
    +
    +
    +

    Uzyskaj certyfikat Kubernetes

    +
    +
    +
    +
    + Certified Kubernetes Application Developer (CKAD) +
    +

    Egzamin na certyfikowanego dewelopera aplikacji (Certified Kubernetes Application Developer) potwierdza umiejętności projektowania, budowania, konfigurowania i udostępniania aplikacji cloud native dla Kubernetesa.

    +
    + Przejdź do certyfikacji +
    +
    +
    +
    +
    + Certified Kubernetes Administrator (CKA) +
    +

    Program certyfikowanego administratora Kubernetesa (Certified Kubernetes Administrator) potwierdza umiejętności, wiedzę i kompetencje niezbędne do wykonywania zadań administratora Kubernetesa.

    +
    + Przejdź do certyfikacji +
    +
    +
    +
    + +
    +
    +
    +

    Partnerzy szkoleniowi Kubernetes

    +

    Nasza sieć partnerów oferuje usługi szkoleniowe z Kubernetesa i projektów cloud native.

    +
    +
    +
    + + +
    +
    diff --git a/content/pt/_index.html b/content/pt/_index.html index 30a826897a241..62cfd340d433e 100644 --- a/content/pt/_index.html +++ b/content/pt/_index.html @@ -1,6 +1,6 @@ --- -title: "Orquestração de contêiner pronto para produção" -abstract: "Implantação, dimensionamento e gerenciamento de contêiner automatizado" +title: "Orquestração de contêineres prontos para produção" +abstract: "Implantação, dimensionamento e gerenciamento automatizado de contêineres" cid: home --- diff --git a/content/pt/docs/contribute/_index.md b/content/pt/docs/contribute/_index.md new file mode 100644 index 0000000000000..0e947a36a2a91 --- /dev/null +++ b/content/pt/docs/contribute/_index.md @@ -0,0 +1,61 @@ +--- +content_template: templates/concept +title: Contribua com o Kubernetes docs +linktitle: Contribute +main_menu: true +weight: 80 +--- + +{{% capture overview %}} + +Caso você gostaria de contribuir com a documentação ou o site do Kubernetes, +ficamos felizes em ter sua ajuda! Qualquer pessoa pode contribuir, seja você novo no +projeto ou se você já esta no mercado há muito tempo. Além disso, Se você se identifica como +desenvolvedor, usuário final ou alguém que simplesmente não suporta ver erros de digitação. +{{% /capture %}} + +{{% capture body %}} + +## Começando + +Qualquer pessoa pode abrir uma issue descrevendo o problema ou melhorias desejadas com a documentação ou contribuir com uma alteração e uma solicitação de mudança (Pull Request - PR). +Algumas tarefas exigem mais confiança e precisam de mais acesso na organização Kubernetes. +Veja [Participando do SIG Docs](/docs/contribute/participating/) para mais detalhes sobre +as funções e permissões. + +A documentação do Kubernetes reside em um repositório do GitHub. Nós damos as boas-vindas +a todas as contribuições, mas você vai precisa estar familiarizado com o uso básico de git e GitHub para +operar efetivamente na comunidade Kubernetes. + +Para se envolver com a documentação: + +1. Assine o [Contrato de Licença de Colaborador](https://github.com/kubernetes/community/blob/master/CLA.md) do CNCF. +2. Familiarize-se com o [repositório de documentação](https://github.com/kubernetes/website) e o [gerador de site estático](https://gohugo.io) hugo. +3. Certifique-se de entender os processos básicos para [melhorar o conteúdo](https://kubernetes.io/docs/contribute/start/#improve-existing-content) e [revisar alterações](https://kubernetes.io/docs/contribute/start/#review-docs-pull-requests). + +## Melhores Práticas recomendadas para contribuições + +- Escreva mensagens GIT claras e significativas. +- Certifique-se de incluir _Github Special Keywords_ que faz referência a issue e o fecha automaticamente quando o PR é mergeado. +- Quando você faz uma pequena alteração em um PR, como corrigir um erro de digitação, qualquer alteração de estilo ou gramática, certifique-se de esmagar seus commits (squash) para não obter um grande número de commits por uma alteração relativamente pequena. +- Certifique-se de incluir uma boa descrição de PR explicando as alterações no código, o motivo de alterar um trecho de código e garantir que haja informações suficientes para o revisor entender seu PR. 
+- Leituras adicionais: + - [chris.beams.io/posts/git-commit/](https://chris.beams.io/posts/git-commit/) + - [github.com/blog/1506-closing-issues-via-pull-requests ](https://github.com/blog/1506-closing-issues-via-pull-requests ) + - [davidwalsh.name/squash-commits-git ](https://davidwalsh.name/squash-commits-git ) + +## Outras maneiras de contribuir + +- Para contribuir com a comunidade Kubernetes por meio de fóruns on-line, como Twitter ou Stack Overflow, ou aprender sobre encontros locais e eventos do Kubernetes, visite o a area de [comunidade Kubernetes](/community/). +- Para contribuir com o desenvolvimento de novas funções, leia o [cheatsheet do colaborador](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet) para começar. + +{{% /capture %}} + +{{% capture whatsnext %}} + +- Para obter mais informações sobre os conceitos básicos de contribuição para a documentação, leia [Comece a contribuir](/docs/contribute/start/). +- Siga o [Guia de estilo de documentação do Kubernetes](/docs/contribute/style/style-guide/) ao propor mudanças. +- Para mais informações sobre o SIG Docs, leia [Participando do SIG Docs](/docs/contribute/participating/). +- Para mais informações sobre a localização de documentos do Kubernetes, leia [Localização da documentação do Kubernetes](/docs/contribute/localization/). + +{{% /capture %}} diff --git a/content/pt/docs/reference/_index.md b/content/pt/docs/reference/_index.md new file mode 100644 index 0000000000000..1c73816a69f19 --- /dev/null +++ b/content/pt/docs/reference/_index.md @@ -0,0 +1,52 @@ +--- +title: Referência +approvers: +- chenopis +linkTitle: "Referência" +main_menu: true +weight: 70 +content_template: templates/concept +--- + +{{% capture overview %}} + +Esta seção da documentação do Kubernetes contém referências. + +{{% /capture %}} + +{{% capture body %}} + +## Referência da API + +* [Visão geral da API do Kubernetes](/docs/reference/using-api/api-overview/) - Visão geral da API para Kubernetes. +* [Referência da API Kubernetes {{< latest-version >}}](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/) + +## Biblioteca de clientes da API + +Para chamar a API Kubernetes de uma linguagem de programação, você pode usar +[bibliotecas de clientes](/docs/reference/using-api/client-libraries/). Bibliotecas oficialmente suportadas: + +- [Biblioteca do cliente Kubernetes em Go](https://github.com/kubernetes/client-go/) +- [Biblioteca do cliente Kubernetes em Python](https://github.com/kubernetes-client/python) +- [Biblioteca do cliente Kubernetes em Java](https://github.com/kubernetes-client/java) +- [Biblioteca do cliente Kubernetes em JavaScript](https://github.com/kubernetes-client/javascript) + +## Referência da CLI + +* [kubectl](/docs/reference/kubectl/overview/) - Ferramenta CLI principal para executar comandos e gerenciar clusters do Kubernetes. + * [JSONPath](/docs/reference/kubectl/jsonpath/) - Guia de sintaxe para usar [Expressões JSONPath](http://goessner.net/articles/JsonPath/) com o kubectl. +* [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) - Ferramenta CLI para provisionar facilmente um cluster Kubernetes seguro. + +## Referência de configuração + +* [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - O principal *agente do nó* que é executado em cada nó. O kubelet usa um conjunto de PodSpecs e garante que os contêineres descritos estejam funcionando e saudáveis. 
+* [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - API REST que valida e configura dados para objetos de API, como pods, serviços, controladores de replicação. +* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - Daemon que incorpora os principais loops de controle enviados com o Kubernetes. +* [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - É possível fazer o encaminhamento de fluxo TCP/UDP de forma simples ou utilizando o algoritimo de Round Robin encaminhando através de um conjunto de back-ends. +* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - Agendador que gerencia disponibilidade, desempenho e capacidade. + +## Documentos de design + +Um arquivo dos documentos de design para as funcionalidades do Kubernetes. Bons pontos de partida são [Arquitetura Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) e [Visão geral do design do Kubernetes](https://git.k8s.io/community/contributors/design-proposals). + +{{% /capture %}} diff --git a/content/pt/docs/reference/kubectl/_index.md b/content/pt/docs/reference/kubectl/_index.md new file mode 100644 index 0000000000000..7b6c2d720b12a --- /dev/null +++ b/content/pt/docs/reference/kubectl/_index.md @@ -0,0 +1,5 @@ +--- +title: "kubectl CLI" +weight: 60 +--- + diff --git a/content/pt/docs/reference/kubectl/cheatsheet.md b/content/pt/docs/reference/kubectl/cheatsheet.md new file mode 100644 index 0000000000000..9cdf34dc37a6e --- /dev/null +++ b/content/pt/docs/reference/kubectl/cheatsheet.md @@ -0,0 +1,390 @@ +--- +title: kubectl Cheat Sheet +reviewers: +- erictune +- krousey +- clove +content_template: templates/concept +card: + name: reference + weight: 30 +--- + +{{% capture overview %}} + +Veja também: [Visão geral do Kubectl](/docs/reference/kubectl/overview/) e [JsonPath Guide](/docs/reference/kubectl/jsonpath). + +Esta página é uma visão geral do comando `kubectl`. + +{{% /capture %}} + +{{% capture body %}} + +# kubectl - Cheat Sheet + +## Kubectl Autocomplete + +### BASH + +```bash +source <(kubectl completion bash) # configuração de autocomplete no bash do shell atual, o pacote bash-completion precisa ter sido instalado primeiro. +echo "source <(kubectl completion bash)" >> ~/.bashrc # para adicionar o autocomplete permanentemente no seu shell bash. +``` + +Você também pode usar uma abreviação para o atalho para `kubectl` que também funciona com o auto completar: + +```bash +alias k=kubectl +complete -F __start_kubectl k +``` + +### ZSH + +```bash +source <(kubectl completion zsh) # configuração para usar autocomplete no terminal zsh no shell atual +echo "if [ $commands[kubectl] ]; then source <(kubectl completion zsh); fi" >> ~/.zshrc # adicionar auto completar permanentemente para o seu shell zsh +``` + +## Contexto e Configuração do Kubectl + +Defina com qual cluster Kubernetes o `kubectl` se comunica e modifique os detalhes da configuração. +Veja a documentação [Autenticando entre clusters com o kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) para +informações detalhadas do arquivo de configuração. + +```bash +kubectl config view # Mostrar configurações do kubeconfig mergeadas. 
+ +# use vários arquivos kubeconfig ao mesmo tempo e visualize a configuração mergeada +KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 + +kubectl config view + +# obtenha a senha para o usuário e2e +kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}' + +kubectl config view -o jsonpath='{.users[].name}' # exibir o primeiro usuário +kubectl config view -o jsonpath='{.users[*].name}' # obtenha uma lista de usuários +kubectl config get-contexts # exibir lista de contextos +kubectl config current-context # exibir o contexto atual +kubectl config use-context my-cluster-name # defina o contexto padrão como my-cluster-name + +# adicione um novo cluster ao seu kubeconfig que suporte autenticação básica +kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword + +# salve o namespace permanentemente para todos os comandos subsequentes do kubectl nesse contexto. +kubectl config set-context --current --namespace=ggckad-s2 + +# defina um contexto utilizando um nome de usuário e o namespace. +kubectl config set-context gce --user=cluster-admin --namespace=foo \ + && kubectl config use-context gce + +kubectl config unset users.foo # excluir usuário foo +``` + +## Aplicar +`apply` gerencia aplicações através de arquivos que definem os recursos do Kubernetes. Ele cria e atualiza recursos em um cluster através da execução `kubectl apply`. +Esta é a maneira recomendada de gerenciar aplicações Kubernetes em ambiente de produção. Veja a [documentação do Kubectl](https://kubectl.docs.kubernetes.io). + +## Criando objetos + +Manifestos Kubernetes podem ser definidos em YAML ou JSON. A extensão de arquivo `.yaml`, +`.yml`, e `.json` pode ser usado. + +```bash +kubectl apply -f ./my-manifest.yaml # criar recurso(s) +kubectl apply -f ./my1.yaml -f ./my2.yaml # criar a partir de vários arquivos +kubectl apply -f ./dir # criar recurso(s) em todos os arquivos de manifesto no diretório +kubectl apply -f https://git.io/vPieo # criar recurso(s) a partir de URL +kubectl create deployment nginx --image=nginx # iniciar uma única instância do nginx +kubectl explain pods,svc # obtenha a documentação de manifesto do pod + +# Crie vários objetos YAML a partir de stdin +cat < pod.yaml + +kubectl attach my-pod -i # Anexar ao contêiner em execução +kubectl port-forward my-pod 5000:6000 # Ouça na porta 5000 na máquina local e encaminhe para a porta 6000 no my-pod +kubectl exec my-pod -- ls / # Executar comando no pod existente (1 contêiner) +kubectl exec my-pod -c my-container -- ls / # Executar comando no pod existente (pod com vários contêineres) +kubectl top pod POD_NAME --containers # Mostrar métricas para um determinado pod e seus contêineres +``` + +## Interagindo com Nós e Cluster + +```bash +kubectl cordon my-node # Marcar o nó my-node como não agendável +kubectl drain my-node # Drene o nó my-node na preparação para manutenção +kubectl uncordon my-node # Marcar nó my-node como agendável +kubectl top node my-node # Mostrar métricas para um determinado nó +kubectl cluster-info # Exibir endereços da master e serviços +kubectl cluster-info dump # Despejar o estado atual do cluster no stdout +kubectl cluster-info dump --output-directory=/path/to/cluster-state # Despejar o estado atual do cluster em /path/to/cluster-state + +# Se uma `taint` com essa chave e efeito já existir, seu valor será substituído conforme especificado. 
+kubectl taint nodes foo dedicated=special-user:NoSchedule +``` + +### Tipos de Recursos + +Listar todos os tipos de recursos suportados, juntamente com seus nomes abreviados, [Grupo de API](/docs/concepts/overview/kubernetes-api/#api-groups), se eles são por [namespaces](/docs/concepts/overview/working-with-objects/namespaces), e [objetos](/docs/concepts/overview/working-with-objects/kubernetes-objects): + +```bash +kubectl api-resources +``` + +Outras operações para explorar os recursos da API: + +```bash +kubectl api-resources --namespaced=true # Todos os recursos com namespace +kubectl api-resources --namespaced=false # Todos os recursos sem namespace +kubectl api-resources -o name # Todos os recursos com saída simples (apenas o nome do recurso) +kubectl api-resources -o wide # Todos os recursos com saída expandida (também conhecida como "ampla") +kubectl api-resources --verbs=list,get # Todos os recursos que suportam os verbos de API "list" e "get" +kubectl api-resources --api-group=extensions # Todos os recursos no grupo de API "extensions" +``` + +### Formatação de Saída + +Para enviar detalhes para a janela do terminal em um formato específico, adicione a flag `-o` (ou `--output`) para um comando `kubectl` suportado. + +Formato de saída | Descrição +--------------| ----------- +`-o=custom-columns=` | Imprimir uma tabela usando uma lista separada por vírgula de colunas personalizadas +`-o=custom-columns-file=` | Imprima uma tabela usando o modelo de colunas personalizadas no arquivo `` +`-o=json` | Saída de um objeto de API formatado em JSON +`-o=jsonpath=