[en] update en docs to use recommended labels
therealdwright committed Jun 7, 2022
1 parent 28c133f commit 2a22230
Showing 14 changed files with 138 additions and 59 deletions.

2 changes: 1 addition & 1 deletion content/en/docs/concepts/configuration/overview.md
@@ -63,7 +63,7 @@ DNS server watches the Kubernetes API for new `Services` and creates a set of DN

## Using Labels

- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app: myapp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app: myapp`. See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of this approach.
- Define and use [labels](/docs/concepts/overview/working-with-objects/labels/) that identify __semantic attributes__ of your application or Deployment, such as `{ app.kubernetes.io/name: MyApp, tier: frontend, phase: test, deployment: v3 }`. You can use these labels to select the appropriate Pods for other resources; for example, a Service that selects all `tier: frontend` Pods, or all `phase: test` components of `app.kubernetes.io/name: MyApp`. See the [guestbook](https://github.com/kubernetes/examples/tree/master/guestbook/) app for examples of this approach.

A Service can be made to span multiple Deployments by omitting release-specific labels from its selector. When you need to update a running service without downtime, use a [Deployment](/docs/concepts/workloads/controllers/deployment/).

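As an illustrative sketch (the Service name, port numbers, and `tier` value combination below are hypothetical, not taken from the guestbook example), a release-spanning Service would select only on the stable, semantic labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend            # hypothetical name, for illustration only
spec:
  selector:
    app.kubernetes.io/name: MyApp   # stable, semantic labels only
    tier: frontend                  # omit release-specific labels such as `deployment: v3`
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080              # assumed container port
```
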
89 changes: 84 additions & 5 deletions content/en/docs/concepts/services-networking/dual-stack.md
@@ -37,7 +37,7 @@ IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features:

The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters:

* Kubernetes 1.20 or later

For information about using dual-stack services with earlier
Kubernetes versions, refer to the documentation for that version
@@ -93,9 +93,13 @@ set the `.spec.ipFamilyPolicy` field to one of the following values:
* Selects the `.spec.ClusterIP` from the list of `.spec.ClusterIPs` based on the address family
of the first element in the `.spec.ipFamilies` array.

If you would like to define which IP family to use for single stack or define the order of IP
families for dual-stack, you can choose the address families by setting an optional field,
`.spec.ipFamilies`, on the Service.
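
For example, a sketch of a Service that asks for dual-stack with IPv6 as the primary family might look like the following (the name and selector are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service           # placeholder name
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv6                   # first entry is the primary family, used for .spec.clusterIP
    - IPv4
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
```
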
{{< note >}}
The `.spec.ipFamilies` field is immutable because the `.spec.ClusterIP` cannot be reallocated on a
@@ -118,6 +122,7 @@ These examples demonstrate the behavior of various dual-stack Service configurat

#### Dual-stack options on new Services

1. This Service specification does not explicitly define `.spec.ipFamilyPolicy`. When you create this Service, Kubernetes assigns a cluster IP for the Service from the first configured `service-cluster-ip-range` and sets the `.spec.ipFamilyPolicy` to `SingleStack`. ([Services without selectors](/docs/concepts/services-networking/service/#services-without-selectors) and [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors will behave in this same way.)

{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}

1. This Service specification explicitly defines `PreferDualStack` in `.spec.ipFamilyPolicy`. When you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6 addresses for the service. The control plane updates the `.spec` for the Service to record the IP address assignments. The field `.spec.ClusterIPs` is the primary field, and contains both assigned IP addresses; `.spec.ClusterIP` is a secondary field with its value calculated from `.spec.ClusterIPs`.

* For the `.spec.ClusterIP` field, the control plane records the IP address that is from the same address family as the first service cluster IP range.
* On a single-stack cluster, the `.spec.ClusterIPs` and `.spec.ClusterIP` fields both only list one address.
* On a cluster with dual-stack enabled, specifying `RequireDualStack` in `.spec.ipFamilyPolicy` behaves the same as `PreferDualStack`.

{{< codenew file="service/networking/dual-stack-preferred-svc.yaml" >}}

@@ -165,6 +181,7 @@ dual-stack.)

You can validate this behavior by using kubectl to inspect an existing service.

```shell
kubectl get svc my-service -o yaml
```

```yaml
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: MyApp
name: my-service
spec:
clusterIP: 10.0.197.123
clusterIPs:
- 10.0.197.123
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app.kubernetes.io/name: MyApp
type: ClusterIP
status:
loadBalancer: {}
```

1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to `None`.

{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}

You can validate this behavior by using kubectl to inspect an existing headless service with selectors.

```shell
kubectl get svc my-service -o yaml
```

```yaml
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: MyApp
name: my-service
spec:
clusterIP: None
clusterIPs:
- None
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app.kubernetes.io/name: MyApp
```

#### Switching Services between single-stack and dual-stack

@@ -43,7 +43,7 @@ metadata:
name: my-service
spec:
selector:
app: MyApp
app.kubernetes.io/name: MyApp
ports:
- protocol: TCP
port: 80
42 changes: 21 additions & 21 deletions content/en/docs/concepts/services-networking/service.md
@@ -75,7 +75,7 @@ The name of a Service object must be a valid
[RFC 1035 label name](/docs/concepts/overview/working-with-objects/names#rfc-1035-label-names).

For example, suppose you have a set of Pods where each listens on TCP port 9376
and contains a label `app=MyApp`:
and contains a label `app.kubernetes.io/name=MyApp`:

```yaml
apiVersion: v1
@@ -84,15 +84,15 @@ metadata:
name: my-service
spec:
selector:
app: MyApp
app.kubernetes.io/name: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 9376
```
This specification creates a new Service object named "my-service", which
targets TCP port 9376 on any Pod with the `app=MyApp` label.
targets TCP port 9376 on any Pod with the `app.kubernetes.io/name=MyApp` label.
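
One way to confirm which Pods the selector matched is to list the endpoints that Kubernetes created for the Service (assuming the Service exists in the current namespace):

```shell
kubectl get endpoints my-service
```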

Kubernetes assigns this Service an IP address (sometimes called the "cluster IP"),
which is used by the Service proxies
Expand Down Expand Up @@ -126,7 +126,7 @@ spec:
ports:
- containerPort: 80
name: http-web-svc
---
apiVersion: v1
kind: Service
@@ -144,9 +144,9 @@ spec:


This works even if there is a mixture of Pods in the Service using a single
configured name, with the same network protocol available via different
port numbers. This offers a lot of flexibility for deploying and evolving
your Services. For example, you can change the port numbers that Pods expose
in the next version of your backend software, without breaking clients.

The default protocol for Services is TCP; you can also use any other
@@ -159,7 +159,7 @@ Each port definition can have the same `protocol`, or a different one.
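
As a sketch, a single Service can define several ports with different protocols; the UDP port below is purely illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
    - name: metrics           # hypothetical UDP endpoint, for illustration only
      protocol: UDP
      port: 9100
      targetPort: 9100
```
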
### Services without selectors

Services most commonly abstract access to Kubernetes Pods thanks to the selector,
but when used with a corresponding Endpoints object and without a selector, the Service can abstract other kinds of backends,
including ones that run outside the cluster. For example:

* You want to have an external database cluster in production, but in your
@@ -222,10 +222,10 @@ In the example above, traffic is routed to the single endpoint defined in
the YAML: `192.0.2.42:9376` (TCP).

{{< note >}}
The Kubernetes API server does not allow proxying to endpoints that are not mapped to
pods. Actions such as `kubectl proxy <service-name>` where the service has no
selector will fail due to this constraint. This prevents the Kubernetes API server
from being used as a proxy to endpoints the caller may not be authorized to access.
{{< /note >}}

An ExternalName Service is a special case of Service that does not have
@@ -289,7 +289,7 @@ There are a few reasons for using proxying for Services:

Later in this page you can read about how the various kube-proxy implementations work. Overall,
you should note that, when running `kube-proxy`, kernel level rules may be
modified (for example, iptables rules might get created), which won't get cleaned up,
in some cases until you reboot. Thus, running kube-proxy is something that should
only be done by an administrator who understands the consequences of having a
low-level, privileged network proxying service on a computer. Although the `kube-proxy`
@@ -418,7 +418,7 @@ metadata:
name: my-service
spec:
selector:
app: MyApp
app.kubernetes.io/name: MyApp
ports:
- name: http
protocol: TCP
@@ -650,7 +650,7 @@ metadata:
spec:
type: NodePort
selector:
app: MyApp
app.kubernetes.io/name: MyApp
ports:
# By default and for convenience, the `targetPort` is set to the same value as the `port` field.
- port: 80
@@ -676,7 +676,7 @@ metadata:
name: my-service
spec:
selector:
app: MyApp
app.kubernetes.io/name: MyApp
ports:
- protocol: TCP
port: 80
@@ -744,13 +744,13 @@ You must explicitly remove the `nodePorts` entry in every Service port to de-all
`spec.loadBalancerClass` enables you to use a load balancer implementation other than the cloud provider default.
By default, `spec.loadBalancerClass` is `nil` and a `LoadBalancer` type of Service uses
the cloud provider's default load balancer implementation if the cluster is configured with
a cloud provider using the `--cloud-provider` component flag.
If `spec.loadBalancerClass` is specified, it is assumed that a load balancer
implementation that matches the specified class is watching for Services.
Any default load balancer implementation (for example, the one provided by
the cloud provider) will ignore Services that have this field set.
`spec.loadBalancerClass` can be set on a Service of type `LoadBalancer` only.
Once set, it cannot be changed.
The value of `spec.loadBalancerClass` must be a label-style identifier,
with an optional prefix such as "`internal-vip`" or "`example.com/internal-vip`".
Unprefixed names are reserved for end-users.
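
A sketch of a `LoadBalancer` Service pinned to a non-default implementation might look like this; the class name reuses the `example.com/internal-vip` prefix example from above and is illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerClass: example.com/internal-vip   # matched by a non-default load balancer controller
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```
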
@@ -1033,7 +1033,7 @@ There are other annotations to manage Classic Elastic Load Balancers that are de

service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f"
# A list of existing security groups to be configured on the ELB created. Unlike the annotation
# service.beta.kubernetes.io/aws-load-balancer-extra-security-groups, this replaces all other security groups previously assigned to the ELB and also overrides the creation
# of a uniquely generated security group for this ELB.
# The first security group ID on this list is used as a source to permit incoming traffic to target worker nodes (service traffic and health checks).
# If multiple ELBs are configured with the same security group ID, only a single permit line will be added to the worker node security groups; that means if you delete any
@@ -1043,7 +1043,7 @@
service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e"
# A list of additional security groups to be added to the created ELB, this leaves the uniquely generated security group in place, this ensures that every ELB
# has a unique security group ID and a matching permit line to allow traffic to the target worker nodes (service traffic and health checks).
# Security groups defined here can be shared between services.

service.beta.kubernetes.io/aws-load-balancer-target-node-labels: "ingress-gw,gw-name=public-api"
# A comma separated list of key-value pairs which are used
@@ -1212,7 +1212,7 @@ metadata:
name: my-service
spec:
selector:
app: MyApp
app.kubernetes.io/name: MyApp
ports:
- name: http
protocol: TCP
6 changes: 3 additions & 3 deletions content/en/docs/concepts/workloads/pods/init-containers.md
@@ -28,7 +28,7 @@ Init containers are exactly like regular containers, except:
* Init containers always run to completion.
* Each init container must complete successfully before the next one starts.

If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds.
However, if the Pod has a `restartPolicy` of Never, and an init container fails during startup of that Pod, Kubernetes treats the overall Pod as failed.

To specify an init container for a Pod, add the `initContainers` field into
@@ -115,7 +115,7 @@ kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
app.kubernetes.io/name: MyApp
spec:
containers:
- name: myapp-container
Expand Down Expand Up @@ -159,7 +159,7 @@ The output is similar to this:
Name: myapp-pod
Namespace: default
[...]
Labels: app=myapp
Labels: app.kubernetes.io/name=MyApp
Status: Pending
[...]
Init Containers:
@@ -24,11 +24,11 @@ This task shows you how to debug a StatefulSet.

## Debugging a StatefulSet

In order to list all the pods which belong to a StatefulSet, which have a label `app=myapp` set on them,
In order to list all the pods which belong to a StatefulSet, which have a label `app.kubernetes.io/name=MyApp` set on them,
you can use the following:

```shell
kubectl get pods -l app=myapp
kubectl get pods -l app.kubernetes.io/name=MyApp
```

If you find that any Pods listed are in `Unknown` or `Terminating` state for an extended period of time,