From b786d89c9057b8f2d1bf4904515953eb8142ecb0 Mon Sep 17 00:00:00 2001 From: Martin Dietze Date: Wed, 14 Feb 2018 17:08:35 +0100 Subject: [PATCH 001/117] HA guide for kubeadm: - tabified section on load balancing - added tab for keepalived configuration --- docs/setup/independent/high-availability.md | 84 ++++++++++++++++++++- 1 file changed, 83 insertions(+), 1 deletion(-) diff --git a/docs/setup/independent/high-availability.md b/docs/setup/independent/high-availability.md index c4c6b72d8ccfb..6e1b6b1a80986 100644 --- a/docs/setup/independent/high-availability.md +++ b/docs/setup/independent/high-availability.md @@ -385,7 +385,14 @@ Please select one of the tabs to see installation instructions for the respectiv ## Set up master Load Balancer -The next step is to create a Load Balancer that sits in front of your master nodes. How you do this depends on your environment; you could, for example, leverage a cloud provider Load Balancer, or set up your own using nginx, keepalived, or HAproxy. Some examples of cloud provider solutions are: +The next step is to create a Load Balancer that sits in front of your master nodes. How you do this depends on your environment; you could, for example, leverage a cloud provider Load Balancer, or set up your own using nginx, keepalived, or HAproxy. + +{% capture choose %} +Please select one of the tabs to see installation instructions for information on load balancing in the respective environment. +{% endcapture %} + +{% capture cloud %} +Some examples of cloud provider solutions are: * [AWS Elastic Load Balancer](https://aws.amazon.com/elasticloadbalancing/) * [GCE Load Balancing](https://cloud.google.com/compute/docs/load-balancing/) @@ -394,6 +401,81 @@ The next step is to create a Load Balancer that sits in front of your master nod You will need to ensure that the load balancer routes to **just `master0` on port 6443**. This is because kubeadm will perform health checks using the load balancer IP. Since `master0` is set up individually first, the other masters will not have running apiservers, which will result in kubeadm hanging indefinitely. If possible, use a smart load balancing algorithm like "least connections", and use health checks so unhealthy nodes can be removed from circulation. Most providers will provide these features. +{% endcapture %} + +{% capture onsite %} +In an on-site environment there may not be a physical load balancer available. Instead, keepalived can be used to setup a virtual IP pointing to a healthy master node. The configuration shown here provides an _active/passive_ setup rather than _real_ load balancing, but it can be extended for this purpose quite easily by setting up HAProxy, nginx or similar on the master nodes (not covered here). + +1. Install keepalived, e.g. using your distribution's package manager. The configuration shown here works with version 1.3.5 and supposedly many others. Make sure to have it enabled (chkconfig, systemd, ...) so that it starts automatically when the respective node comes up. + +2. Create the following configuration file _/etc/keepalived/keepalived.conf_ on all master nodes: + + ```shell + ! 
Configuration File for keepalived
+    global_defs {
+      router_id LVS_DEVEL
+    }
+
+    vrrp_script check_apiserver {
+      script "/etc/keepalived/check_apiserver.sh"
+      interval 3
+      weight -2
+      fall 10
+      rise 2
+    }
+
+    vrrp_instance VI_1 {
+      state <STATE>
+      interface <INTERFACE>
+      virtual_router_id 51
+      priority <PRIORITY>
+      authentication {
+        auth_type PASS
+        auth_pass 4be37dc3b4c90194d1600c483e10ad1d
+      }
+      virtual_ipaddress {
+        <VIRTUAL-IP>
+      }
+      track_script {
+        check_apiserver
+      }
+    }
+    ```
+
+    In the section `vrrp_instance VI_1`, change a few lines depending on your setup:
+
+    * `state` is either `MASTER` (on the first master node) or `BACKUP` (on the other master nodes).
+    * `interface` is the name of an existing public interface to bind the virtual IP to (usually the primary interface).
+    * `priority` should be higher for the first master node, e.g. 101, and lower for the others, e.g. 100.
+    * `auth_pass`: use any random string here.
+    * `virtual_ipaddress` should contain the virtual IP for the master nodes.
+
+3. Install the following health check script to _/etc/keepalived/check_apiserver.sh_ on all master nodes:
+
+    ```shell
+    #!/bin/sh
+
+    errorExit() {
+      echo "*** $*" 1>&2
+      exit 1
+    }
+
+    curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error GET https://localhost:6443/"
+    if ip addr | grep -q <VIRTUAL-IP>; then
+      curl --silent --max-time 2 --insecure https://<VIRTUAL-IP>:6443/ -o /dev/null || errorExit "Error GET https://<VIRTUAL-IP>:6443/"
+    fi
+    ```
+
+    Replace `<VIRTUAL-IP>` with your chosen virtual IP.
+
+4. Restart keepalived. While no Kubernetes services are up yet, it will log health check failures on all master nodes. This will stop as soon as the first master node has been bootstrapped.
+
+{% endcapture %}
+
+{% assign tab_names = "Choose one...,Cloud,On-Site" | split: ',' | compact %}
+{% assign tab_contents = site.emptyArray | push: choose | push: cloud | push: onsite %}
+
+{% include tabs.md %}
 
 ## Acquire etcd certs
 
From 7c32550bb72ebd8114f456beccb2e883709ab37b Mon Sep 17 00:00:00 2001
From: Martin Dietze
Date: Wed, 14 Feb 2018 18:51:18 +0100
Subject: [PATCH 002/117] HA guide for kubeadm: fixed tab navigation for load balancer setup.

---
 docs/setup/independent/high-availability.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/setup/independent/high-availability.md b/docs/setup/independent/high-availability.md
index 6e1b6b1a80986..e9ce05c1d1513 100644
--- a/docs/setup/independent/high-availability.md
+++ b/docs/setup/independent/high-availability.md
@@ -378,6 +378,7 @@ Please select one of the tabs to see installation instructions for the respectiv
I {% endcapture %} +{% assign tab_set_name = "lb_mode" %} {% assign tab_names = "Choose one...,Cloud,On-Site" | split: ',' | compact %} {% assign tab_contents = site.emptyArray | push: choose | push: cloud | push: onsite %} From 747d8037220e444ab226efdac186c81b9d49a9d9 Mon Sep 17 00:00:00 2001 From: Martin Dietze Date: Thu, 22 Feb 2018 11:53:12 +0100 Subject: [PATCH 003/117] HA guide for kubeadm: text change to highlight the fact that keepalived is not the prescribed solution to setting up a virtual IP --- docs/setup/independent/high-availability.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/docs/setup/independent/high-availability.md b/docs/setup/independent/high-availability.md index e9ce05c1d1513..c78673783f5bc 100644 --- a/docs/setup/independent/high-availability.md +++ b/docs/setup/independent/high-availability.md @@ -386,7 +386,7 @@ Please select one of the tabs to see installation instructions for the respectiv ## Set up master Load Balancer -The next step is to create a Load Balancer that sits in front of your master nodes. How you do this depends on your environment; you could, for example, leverage a cloud provider Load Balancer, or set up your own using nginx, keepalived, or HAproxy. +The next step is to create a Load Balancer that sits in front of your master nodes. How you do this depends on your environment; you could, for example, leverage a cloud provider Load Balancer, or set up your own using NGINX, keepalived, or HAproxy. {% capture choose %} Please select one of the tabs to see installation instructions for information on load balancing in the respective environment. @@ -405,7 +405,9 @@ If possible, use a smart load balancing algorithm like "least connections", and {% endcapture %} {% capture onsite %} -In an on-site environment there may not be a physical load balancer available. Instead, keepalived can be used to setup a virtual IP pointing to a healthy master node. The configuration shown here provides an _active/passive_ setup rather than _real_ load balancing, but it can be extended for this purpose quite easily by setting up HAProxy, nginx or similar on the master nodes (not covered here). +In an on-site environment there may not be a physical load balancer available. Instead, a virtual IP pointing to a healthy master node can be used. There are a number of solutions for this including keepalived, Pacemaker and probably many others, some with and some without load balancing. + +As an example we outline a simple setup based on keepalived. Depending on environment and requirements people may prefer different solutions. The configuration shown here provides an _active/passive_ failover without load balancing. If required, load balancing can by added quite easily by setting up HAProxy, NGINX or similar on the master nodes (not covered in this guide). 1. Install keepalived, e.g. using your distribution's package manager. The configuration shown here works with version 1.3.5 and supposedly many others. Make sure to have it enabled (chkconfig, systemd, ...) so that it starts automatically when the respective node comes up. From 773def6cdc13c2d5f25d7bfe89d11fbd17622ff8 Mon Sep 17 00:00:00 2001 From: Martin Dietze Date: Thu, 22 Feb 2018 15:46:29 +0100 Subject: [PATCH 004/117] High Availability guide for kubeadm: text change as proposed by @mattkelly. 
---
 docs/setup/independent/high-availability.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/setup/independent/high-availability.md b/docs/setup/independent/high-availability.md
index c78673783f5bc..b75826675cfa5 100644
--- a/docs/setup/independent/high-availability.md
+++ b/docs/setup/independent/high-availability.md
@@ -409,7 +409,7 @@ In an on-site environment there may not be a physical load balancer available. I
 
 As an example we outline a simple setup based on keepalived. Depending on environment and requirements people may prefer different solutions. The configuration shown here provides an _active/passive_ failover without load balancing. If required, load balancing can by added quite easily by setting up HAProxy, NGINX or similar on the master nodes (not covered in this guide).
 
-1. Install keepalived, e.g. using your distribution's package manager. The configuration shown here works with version 1.3.5 and supposedly many others. Make sure to have it enabled (chkconfig, systemd, ...) so that it starts automatically when the respective node comes up.
+1. Install keepalived, e.g. using your distribution's package manager. The configuration shown here works with version `1.3.5` but is expected to work with many other versions. Make sure to have it enabled (chkconfig, systemd, ...) so that it starts automatically when the respective node comes up.
 
 2. Create the following configuration file _/etc/keepalived/keepalived.conf_ on all master nodes:
 
From aa0d8f1e826b60d3de5aef71d2dcef66b6442aa2 Mon Sep 17 00:00:00 2001
From: Srini Brahmaroutu
Date: Fri, 23 Feb 2018 09:06:45 -0800
Subject: [PATCH 005/117] Added glossary term for Container environment variable (#7471)

---
 _data/glossary/container-env-variables.yaml | 9 +++++++++
 1 file changed, 9 insertions(+)
 create mode 100644 _data/glossary/container-env-variables.yaml

diff --git a/_data/glossary/container-env-variables.yaml b/_data/glossary/container-env-variables.yaml
new file mode 100644
index 0000000000000..ea1fbd2260f7f
--- /dev/null
+++ b/_data/glossary/container-env-variables.yaml
@@ -0,0 +1,9 @@
+id: container-env-variables
+name: Container Environment Variables
+full-link: /docs/concepts/containers/container-environment-variables.md
+tags:
+- fundamental
+short-description: >
+  Container environment variables are name/value pairs that provide useful information to containers running in a Pod.
+long-description: >
+  Container environment variables provide information that is required by the running containerized applications, along with information about important resources, to the {% glossary_tooltip text="Containers" term_id="container" %}. For example, the file system, information about the container itself, and other cluster resources such as service endpoints.

From 34c656f32adc1c3f87230cec0c45f18beaee7869 Mon Sep 17 00:00:00 2001
From: Sean Dague
Date: Fri, 23 Feb 2018 10:01:45 -0800
Subject: [PATCH 006/117] glossary: move labels to label (#7486)

The term_id for Label is singular, but the file is plural. All the rest of the terms in the glossary prefer singular for the glossary reference. Harmonizing the label entry is helpful for consistency.

Update all in tree tooltip references at the same time.
--- _data/glossary/{labels.yaml => label.yaml} | 0 _data/glossary/selector.yaml | 2 +- docs/user-journeys/users/application-developer/advanced.md | 2 +- .../user-journeys/users/application-developer/foundational.md | 4 ++-- 4 files changed, 4 insertions(+), 4 deletions(-) rename _data/glossary/{labels.yaml => label.yaml} (100%) diff --git a/_data/glossary/labels.yaml b/_data/glossary/label.yaml similarity index 100% rename from _data/glossary/labels.yaml rename to _data/glossary/label.yaml diff --git a/_data/glossary/selector.yaml b/_data/glossary/selector.yaml index 6925dd9f7d2dc..e8e7a7b72d52d 100644 --- a/_data/glossary/selector.yaml +++ b/_data/glossary/selector.yaml @@ -9,5 +9,5 @@ short-description: > Allows users to filter a list of resources based on labels. long-description: > Selectors are applied when querying lists of resources to filter - them by {% glossary_tooltip text="Labels" term_id="labels" + them by {% glossary_tooltip text="Labels" term_id="label" %}. diff --git a/docs/user-journeys/users/application-developer/advanced.md b/docs/user-journeys/users/application-developer/advanced.md index 9d5b0c2143d92..401de698dcf87 100644 --- a/docs/user-journeys/users/application-developer/advanced.md +++ b/docs/user-journeys/users/application-developer/advanced.md @@ -34,7 +34,7 @@ As you may know, it's an antipattern to migrate an entire app (e.g. containerize #### Pod configuration -Usually, you use {% glossary_tooltip text="labels" term_id="labels" %} and {% glossary_tooltip text="annotations" term_id="annotation" %} to attach metadata to your resources. To inject data into your resources, you'd likely create {% glossary_tooltip text="ConfigMaps" term_id="configmap" %} (for nonconfidential data) or {% glossary_tooltip text="Secrets" term_id="secret" %} (for confidential data). +Usually, you use {% glossary_tooltip text="labels" term_id="label" %} and {% glossary_tooltip text="annotations" term_id="annotation" %} to attach metadata to your resources. To inject data into your resources, you'd likely create {% glossary_tooltip text="ConfigMaps" term_id="configmap" %} (for nonconfidential data) or {% glossary_tooltip text="Secrets" term_id="secret" %} (for confidential data). Below are some other, lesser-known ways of configuring your resources' Pods: diff --git a/docs/user-journeys/users/application-developer/foundational.md b/docs/user-journeys/users/application-developer/foundational.md index 9a5f3eb636ea2..fd18b2c9fc150 100644 --- a/docs/user-journeys/users/application-developer/foundational.md +++ b/docs/user-journeys/users/application-developer/foundational.md @@ -71,7 +71,7 @@ Through these deployment tasks, you'll gain familiarity with the following: * Common workload objects * **{% glossary_tooltip text="Deployment" term_id="deployment" %}** - The most common way of running *X* copies (Pods) of your application. Supports rolling updates to your container images. - * **{% glossary_tooltip text="Service" term_id="deployment" %}** - By itself, a Deployment can't receive traffic. Setting up a Service is one of the simplest ways to configure a Deployment to receive and loadbalance requests. Depending on the `type` of Service used, these requests can come from external client apps or be limited to apps within the same cluster. A Service is tied to a specific Deployment using {% glossary_tooltip text="label" term_id="labels" %} selection. + * **{% glossary_tooltip text="Service" term_id="deployment" %}** - By itself, a Deployment can't receive traffic. 
Setting up a Service is one of the simplest ways to configure a Deployment to receive and loadbalance requests. Depending on the `type` of Service used, these requests can come from external client apps or be limited to apps within the same cluster. A Service is tied to a specific Deployment using {% glossary_tooltip text="label" term_id="label" %} selection. The subsequent topics are also useful to know for basic application deployment. @@ -79,7 +79,7 @@ The subsequent topics are also useful to know for basic application deployment. You can also specify custom information about your Kubernetes API objects by attaching key/value fields. Kubernetes provides two ways of doing this: -* **{% glossary_tooltip text="Labels" term_id="labels" %}** - Identifying metadata that you can use to sort and select sets of API objects. Labels have many applications, including the following: +* **{% glossary_tooltip text="Labels" term_id="label" %}** - Identifying metadata that you can use to sort and select sets of API objects. Labels have many applications, including the following: * *To keep the right number of replicas (Pods) running in a Deployment.* The specified label (`app: nginx` in the [stateless app example](/docs/tasks/run-application/run-stateless-application-deployment/#creating-and-exploring-an-nginx-deployment){:target="_blank"}) is used to stamp the Deployment's newly created Pods (as the value of the `spec.template.labels` configuration field), and to query which Pods it already manages (as the value of `spec.selector.matchLabels`). From 5ec511844535bbffa8c7fdea0f1044160f1a6454 Mon Sep 17 00:00:00 2001 From: Joel Smith Date: Fri, 23 Feb 2018 11:06:45 -0700 Subject: [PATCH 007/117] Don't encourage people to mount downwardAPI volumes on /etc (#7484) Because API data volumes like downwardAPI are expected to be fully managed by Kubernetes and are now mounted read-only, this causes problems with other files in /etc like /etc/resolv.conf that Docker tries to add to the volume. Our examples should show such volumes being mounted to a dedicated subdirectory for the volume. 
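For illustration only, a minimal sketch of the recommended pattern; the Pod name, image, and label values below are examples and are not taken from the files changed in this patch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: podinfo-example        # example name
  labels:
    app: podinfo-example       # example label, exposed via the downward API below
spec:
  containers:
  - name: app
    image: busybox             # example image
    command: ["sh", "-c", "cat /etc/podinfo/labels; sleep 3600"]
    volumeMounts:
    - name: podinfo
      # Mount the downwardAPI volume at a dedicated subdirectory such as
      # /etc/podinfo rather than /etc itself, so files like /etc/resolv.conf
      # are not affected by the read-only volume.
      mountPath: /etc/podinfo
      readOnly: true
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels
```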
--- .../dapi-volume-resources.yaml | 12 ++++++------ .../inject-data-application/dapi-volume.yaml | 10 +++++----- .../dapi-volume-resources.yaml | 18 +++++++++--------- .../inject-data-application/dapi-volume.yaml | 10 +++++----- 4 files changed, 25 insertions(+), 25 deletions(-) diff --git a/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml b/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml index 55af44ac1b97b..07bebfb47b5d0 100644 --- a/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml +++ b/cn/docs/tasks/inject-data-application/dapi-volume-resources.yaml @@ -10,14 +10,14 @@ spec: args: - while true; do echo -en '\n'; - if [[ -e /etc/cpu_limit ]]; then - echo -en '\n'; cat /etc/cpu_limit; fi; + if [[ -e /etc/podinfo/cpu_limit ]]; then + echo -en '\n'; cat /etc/podinfo/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then - echo -en '\n'; cat /etc/cpu_request; fi; + echo -en '\n'; cat /etc/podinfo/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then - echo -en '\n'; cat /etc/mem_limit; fi; + echo -en '\n'; cat /etc/podinfo/mem_limit; fi; if [[ -e /etc/mem_request ]]; then - echo -en '\n'; cat /etc/mem_request; fi; + echo -en '\n'; cat /etc/podinfo/mem_request; fi; sleep 5; done; resources: @@ -29,7 +29,7 @@ spec: cpu: "250m" volumeMounts: - name: podinfo - mountPath: /etc + mountPath: /etc/podinfo readOnly: false volumes: - name: podinfo diff --git a/cn/docs/tasks/inject-data-application/dapi-volume.yaml b/cn/docs/tasks/inject-data-application/dapi-volume.yaml index 864c99d11e01f..e7515afba5829 100644 --- a/cn/docs/tasks/inject-data-application/dapi-volume.yaml +++ b/cn/docs/tasks/inject-data-application/dapi-volume.yaml @@ -16,15 +16,15 @@ spec: command: ["sh", "-c"] args: - while true; do - if [[ -e /etc/labels ]]; then - echo -en '\n\n'; cat /etc/labels; fi; - if [[ -e /etc/annotations ]]; then - echo -en '\n\n'; cat /etc/annotations; fi; + if [[ -e /etc/podinfo/labels ]]; then + echo -en '\n\n'; cat /etc/podinfo/labels; fi; + if [[ -e /etc/podinfo/annotations ]]; then + echo -en '\n\n'; cat /etc/podinfo/annotations; fi; sleep 5; done; volumeMounts: - name: podinfo - mountPath: /etc + mountPath: /etc/podinfo readOnly: false volumes: - name: podinfo diff --git a/docs/tasks/inject-data-application/dapi-volume-resources.yaml b/docs/tasks/inject-data-application/dapi-volume-resources.yaml index 55af44ac1b97b..e357e5a3360da 100644 --- a/docs/tasks/inject-data-application/dapi-volume-resources.yaml +++ b/docs/tasks/inject-data-application/dapi-volume-resources.yaml @@ -10,14 +10,14 @@ spec: args: - while true; do echo -en '\n'; - if [[ -e /etc/cpu_limit ]]; then - echo -en '\n'; cat /etc/cpu_limit; fi; - if [[ -e /etc/cpu_request ]]; then - echo -en '\n'; cat /etc/cpu_request; fi; - if [[ -e /etc/mem_limit ]]; then - echo -en '\n'; cat /etc/mem_limit; fi; - if [[ -e /etc/mem_request ]]; then - echo -en '\n'; cat /etc/mem_request; fi; + if [[ -e /etc/podinfo/cpu_limit ]]; then + echo -en '\n'; cat /etc/podinfo/cpu_limit; fi; + if [[ -e /etc/podinfo/cpu_request ]]; then + echo -en '\n'; cat /etc/podinfo/cpu_request; fi; + if [[ -e /etc/podinfo/mem_limit ]]; then + echo -en '\n'; cat /etc/podinfo/mem_limit; fi; + if [[ -e /etc/podinfo/mem_request ]]; then + echo -en '\n'; cat /etc/podinfo/mem_request; fi; sleep 5; done; resources: @@ -29,7 +29,7 @@ spec: cpu: "250m" volumeMounts: - name: podinfo - mountPath: /etc + mountPath: /etc/podinfo readOnly: false volumes: - name: podinfo diff --git a/docs/tasks/inject-data-application/dapi-volume.yaml 
b/docs/tasks/inject-data-application/dapi-volume.yaml index 864c99d11e01f..e7515afba5829 100644 --- a/docs/tasks/inject-data-application/dapi-volume.yaml +++ b/docs/tasks/inject-data-application/dapi-volume.yaml @@ -16,15 +16,15 @@ spec: command: ["sh", "-c"] args: - while true; do - if [[ -e /etc/labels ]]; then - echo -en '\n\n'; cat /etc/labels; fi; - if [[ -e /etc/annotations ]]; then - echo -en '\n\n'; cat /etc/annotations; fi; + if [[ -e /etc/podinfo/labels ]]; then + echo -en '\n\n'; cat /etc/podinfo/labels; fi; + if [[ -e /etc/podinfo/annotations ]]; then + echo -en '\n\n'; cat /etc/podinfo/annotations; fi; sleep 5; done; volumeMounts: - name: podinfo - mountPath: /etc + mountPath: /etc/podinfo readOnly: false volumes: - name: podinfo From d432d6bafeaa36cb05709e613fd0d57a8b78a784 Mon Sep 17 00:00:00 2001 From: Manuel Alejandro de Brito Fontes Date: Fri, 23 Feb 2018 15:11:45 -0300 Subject: [PATCH 008/117] Update cassandra tutorial (#7478) --- docs/tutorials/stateful-application/cassandra.md | 4 ++-- .../cassandra/cassandra-statefulset.yaml | 10 ++++++---- 2 files changed, 8 insertions(+), 6 deletions(-) diff --git a/docs/tutorials/stateful-application/cassandra.md b/docs/tutorials/stateful-application/cassandra.md index bda194783d352..9f75b40c1dd1e 100644 --- a/docs/tutorials/stateful-application/cassandra.md +++ b/docs/tutorials/stateful-application/cassandra.md @@ -11,9 +11,9 @@ Deploying stateful distributed applications, like Cassandra, within a clustered **Cassandra Docker** -The Pods use the [`gcr.io/google-samples/cassandra:v12`](https://github.com/kubernetes/examples/blob/master/cassandra/image/Dockerfile) +The Pods use the [`gcr.io/google-samples/cassandra:v13`](https://github.com/kubernetes/examples/blob/master/cassandra/image/Dockerfile) image from Google's [container registry](https://cloud.google.com/container-registry/docs/). -The docker is based on `debian:jessie` and includes OpenJDK 8. This image includes a standard Cassandra installation from the Apache Debian repo. By using environment variables you can change values that are inserted into `cassandra.yaml`. +The docker image above is based on [debian-base](https://github.com/kubernetes/kubernetes/tree/master/build/debian-base) and includes OpenJDK 8. This image includes a standard Cassandra installation from the Apache Debian repo. By using environment variables you can change values that are inserted into `cassandra.yaml`. 
| ENV VAR | DEFAULT VALUE | | ------------- |:-------------: | diff --git a/docs/tutorials/stateful-application/cassandra/cassandra-statefulset.yaml b/docs/tutorials/stateful-application/cassandra/cassandra-statefulset.yaml index dbcba6cbc72c2..24d6ff208dde8 100644 --- a/docs/tutorials/stateful-application/cassandra/cassandra-statefulset.yaml +++ b/docs/tutorials/stateful-application/cassandra/cassandra-statefulset.yaml @@ -15,9 +15,10 @@ spec: labels: app: cassandra spec: + terminationGracePeriodSeconds: 1800 containers: - name: cassandra - image: gcr.io/google-samples/cassandra:v12 + image: gcr.io/google-samples/cassandra:v13 imagePullPolicy: Always ports: - containerPort: 7000 @@ -42,7 +43,10 @@ spec: lifecycle: preStop: exec: - command: ["/bin/sh", "-c", "PID=$(pidof java) && kill $PID && while ps -p $PID > /dev/null; do sleep 1; done"] + command: + - /bin/sh + - -c + - nodetool drain env: - name: MAX_HEAP_SIZE value: 512M @@ -56,8 +60,6 @@ spec: value: "DC1-K8Demo" - name: CASSANDRA_RACK value: "Rack1-K8Demo" - - name: CASSANDRA_AUTO_BOOTSTRAP - value: "false" - name: POD_IP valueFrom: fieldRef: From 086251e59377bf3fda02ace6c5698a9067a0b739 Mon Sep 17 00:00:00 2001 From: Ayush Pateria Date: Fri, 23 Feb 2018 23:43:45 +0530 Subject: [PATCH 009/117] Update cassandra-statefulset.yaml (#7438) Specifying storage class name using annotations is deprecated since v1.6. Updating it to storageClassName field. --- .../stateful-application/cassandra/cassandra-statefulset.yaml | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/docs/tutorials/stateful-application/cassandra/cassandra-statefulset.yaml b/docs/tutorials/stateful-application/cassandra/cassandra-statefulset.yaml index 24d6ff208dde8..ee8a604caf21b 100644 --- a/docs/tutorials/stateful-application/cassandra/cassandra-statefulset.yaml +++ b/docs/tutorials/stateful-application/cassandra/cassandra-statefulset.yaml @@ -84,10 +84,9 @@ spec: volumeClaimTemplates: - metadata: name: cassandra-data - annotations: - volume.beta.kubernetes.io/storage-class: fast spec: accessModes: [ "ReadWriteOnce" ] + storageClassName: fast resources: requests: storage: 1Gi From e200579d01b516643e5239b729fabb9ad655f2eb Mon Sep 17 00:00:00 2001 From: "Jorge O. Castro" Date: Fri, 23 Feb 2018 13:20:45 -0500 Subject: [PATCH 010/117] Replace outdated instructions with the maintained ones. (#7472) --- community/index.html | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/community/index.html b/community/index.html index 43225786a1fe7..8e901a2374bc8 100644 --- a/community/index.html +++ b/community/index.html @@ -14,9 +14,9 @@

Community

Ensuring Kubernetes works well everywhere and for everyone.

Connect with the Kubernetes community on our Slack channel or join the Kubernetes-dev Google group. A weekly - community meeting takes place via video conference to discuss the state of affairs, - get a calendar invite - to participate.

+ community meeting takes place via video conference to discuss the state of affairs, see + these instructions for information + on how to participate.

You can also join Kubernetes all around the world through our Kubernetes Meetup Community and the Kubernetes Cloud Native Meetup Community.

From 3885c80063e7b3a0ab0ede00034b115751e44cb5 Mon Sep 17 00:00:00 2001 From: Mike Wilson Date: Fri, 23 Feb 2018 13:32:44 -0500 Subject: [PATCH 011/117] Add to contact methods (#7494) Add kubernetes-users and kubernetes-novice to contact methods --- docs/getting-started-guides/ubuntu/index.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/getting-started-guides/ubuntu/index.md b/docs/getting-started-guides/ubuntu/index.md index c851f32a6e71b..8bec8f928427f 100644 --- a/docs/getting-started-guides/ubuntu/index.md +++ b/docs/getting-started-guides/ubuntu/index.md @@ -59,6 +59,8 @@ These are more in-depth guides for users choosing to run Kubernetes in productio We're normally following the following Slack channels: +- [kubernetes-users](https://kubernetes.slack.com/messages/kubernetes-users/) +- [kubernetes-novice](https://kubernetes.slack.com/messages/kubernetes-novice/) - [sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/) - [sig-cluster-ops](https://kubernetes.slack.com/messages/sig-cluster-ops/) - [sig-onprem](https://kubernetes.slack.com/messages/sig-onprem/) From 9e56ea6c6758d65902f57e040444ca2fe163e2f2 Mon Sep 17 00:00:00 2001 From: Kai Chen Date: Fri, 23 Feb 2018 13:01:46 -0800 Subject: [PATCH 012/117] Fix reference to kubernetes-objects (#7499) --- docs/user-journeys/users/cluster-operator/foundational.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/user-journeys/users/cluster-operator/foundational.md b/docs/user-journeys/users/cluster-operator/foundational.md index 581a1ae6c33d1..66148a7da5192 100644 --- a/docs/user-journeys/users/cluster-operator/foundational.md +++ b/docs/user-journeys/users/cluster-operator/foundational.md @@ -44,7 +44,7 @@ Katacoda provides a browser-based connection to a single-node cluster, using min You interact with Kubernetes either through a dashboard, an API, or using a command-line tool (such as `kubectl`) that interacts with the Kubernetes API. Be familiar with [Organizing Cluster Access](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) by using configuration files. The Kubernetes API exposes a number of resources that provide the building blocks and abstractions that are used to run software on Kubernetes. -Learn more about these resources at [Understanding Kubernetes Objects](/docs/concepts/overview/kubernetes-objects). +Learn more about these resources at [Understanding Kubernetes Objects](/docs/concepts/overview/working-with-objects/kubernetes-objects). These resources are covered in a number of articles within the Kubernetes documentation. * [Pod Overview](/docs/concepts/workloads/pods/pod-overview/) From 57ba29877daa2281dec706b743059ed1c2ee5975 Mon Sep 17 00:00:00 2001 From: Thomas Maddox Date: Sat, 24 Feb 2018 11:31:48 -0600 Subject: [PATCH 013/117] custom-resources.md: Swap "You are have"/"You have" (#7500) Minor grammatical error when describing what CRDs are a good fit for. --- docs/concepts/api-extension/custom-resources.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/api-extension/custom-resources.md b/docs/concepts/api-extension/custom-resources.md index d602cd51cd38c..fe7f06f6034d8 100644 --- a/docs/concepts/api-extension/custom-resources.md +++ b/docs/concepts/api-extension/custom-resources.md @@ -134,7 +134,7 @@ CRDs are easier to use. Aggregated APIs are more flexible. 
Choose the method tha Typically, CRDs are a good fit if: -* You are have a handful of fields +* You have a handful of fields * You are using the resource within your company, or as part of a small open-source project (as opposed to a commercial product) #### Comparing ease of use From d1b446a0801b9a7b55507aa365e71712e2bdf480 Mon Sep 17 00:00:00 2001 From: Ozioma Date: Sat, 24 Feb 2018 18:45:45 +0100 Subject: [PATCH 014/117] edit command to create config file (#7293) "touch" is not a windows or power shell command --- docs/tasks/tools/install-kubectl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tasks/tools/install-kubectl.md b/docs/tasks/tools/install-kubectl.md index 9d531ba34e36c..0c583f7372163 100644 --- a/docs/tasks/tools/install-kubectl.md +++ b/docs/tasks/tools/install-kubectl.md @@ -137,7 +137,7 @@ re-run install-kubectl.ps1 to install latest binaries cd C:\users\yourusername (Or wherever your %HOME% directory is) mkdir .kube cd .kube - touch config + New-Item config -type file Edit the config file with a text editor of your choice, such as Notepad for example. From 8d7accb1a947978282b69794b70c608cc094ea35 Mon Sep 17 00:00:00 2001 From: Nikhita Raghunath Date: Sun, 25 Feb 2018 02:29:44 +0530 Subject: [PATCH 015/117] Document ability to do object count quota for all namespaced resources (#7441) --- docs/concepts/policy/resource-quotas.md | 78 +++++++++++++++++++------ 1 file changed, 61 insertions(+), 17 deletions(-) diff --git a/docs/concepts/policy/resource-quotas.md b/docs/concepts/policy/resource-quotas.md index 11a51a76f2533..fa12317377e3c 100644 --- a/docs/concepts/policy/resource-quotas.md +++ b/docs/concepts/policy/resource-quotas.md @@ -47,8 +47,7 @@ enabled when the apiserver `--admission-control=` flag has `ResourceQuota` as one of its arguments. Resource Quota is enforced in a particular namespace when there is a -`ResourceQuota` object in that namespace. There should be at most one -`ResourceQuota` object in a namespace. +`ResourceQuota` object in that namespace. ## Compute Resource Quota @@ -93,8 +92,34 @@ In release 1.8, quota support for local ephemeral storage is added as alpha feat ## Object Count Quota -The number of objects of a given type can be restricted. The following types -are supported: +The 1.9 release added support to quota all standard namespaced resource types using the following syntax: + +* `count/.` + +Here is an example set of resources users may want to put under object count quota: + +* `count/persistentvolumeclaims` +* `count/services` +* `count/secrets` +* `count/configmaps` +* `count/replicationcontrollers` +* `count/deployments.apps` +* `count/replicasets.apps` +* `count/statefulsets.apps` +* `count/jobs.batch` +* `count/cronjobs.batch` +* `count/deployments.extensions` + +When using `count/*` resource quota, an object is charged against the quota if it exists in server storage. +These types of quotas are useful to protect against exhaustion of storage resources. For example, you may +want to quota the number of secrets in a server given their large size. Too many secrets in a cluster can +actually prevent servers and controllers from starting! You may choose to quota jobs to protect against +a poorly configured cronjob creating too many jobs in a namespace causing a denial of service. + +Prior to the 1.9 release, it was possible to do generic object count quota on a limited set of resources. +In addition, it is possible to further constrain quota for particular resources by their type. 
+ +The following types are supported: | Resource Name | Description | | ------------------------------- | ------------------------------------------------- | @@ -109,11 +134,9 @@ are supported: | `secrets` | The total number of secrets that can exist in the namespace. | For example, `pods` quota counts and enforces a maximum on the number of `pods` -created in a single namespace. - -You might want to set a pods quota on a namespace -to avoid the case where a user creates many small pods and exhausts the cluster's -supply of Pod IPs. +created in a single namespace that are not terminal. You might want to set a `pods` +quota on a namespace to avoid the case where a user creates many small pods and +exhausts the cluster's supply of Pod IPs. ## Quota Scopes @@ -156,9 +179,9 @@ then it requires that every incoming container specifies an explicit limit for t Kubectl supports creating, updating, and viewing quotas: ```shell -$ kubectl create namespace myspace +kubectl create namespace myspace -$ cat < compute-resources.yaml +cat < compute-resources.yaml apiVersion: v1 kind: ResourceQuota metadata: @@ -171,9 +194,9 @@ spec: limits.cpu: "2" limits.memory: 2Gi EOF -$ kubectl create -f ./compute-resources.yaml --namespace=myspace +kubectl create -f ./compute-resources.yaml --namespace=myspace -$ cat < object-counts.yaml +cat < object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: @@ -187,14 +210,14 @@ spec: services: "10" services.loadbalancers: "2" EOF -$ kubectl create -f ./object-counts.yaml --namespace=myspace +kubectl create -f ./object-counts.yaml --namespace=myspace -$ kubectl get quota --namespace=myspace +kubectl get quota --namespace=myspace NAME AGE compute-resources 30s object-counts 32s -$ kubectl describe quota compute-resources --namespace=myspace +kubectl describe quota compute-resources --namespace=myspace Name: compute-resources Namespace: myspace Resource Used Hard @@ -205,7 +228,7 @@ pods 0 4 requests.cpu 0 1 requests.memory 0 1Gi -$ kubectl describe quota object-counts --namespace=myspace +kubectl describe quota object-counts --namespace=myspace Name: object-counts Namespace: myspace Resource Used Hard @@ -218,6 +241,27 @@ services 0 10 services.loadbalancers 0 2 ``` +Kubectl also supports object count quota for all standard namespaced resources +using the syntax `count/.`: + +```shell +kubectl create namespace myspace + +kubectl create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 --namespace=myspace + +kubectl run nginx --image=nginx --replicas=2 --namespace=myspace + +kubectl describe quota --namespace=myspace +Name: test +Namespace: myspace +Resource Used Hard +-------- ---- ---- +count/deployments.extensions 1 2 +count/pods 2 3 +count/replicasets.extensions 1 4 +count/secrets 1 4 +``` + ## Quota and Cluster Capacity Resource Quota objects are independent of the Cluster Capacity. 
They are From 6e8384d1b427a22a715ea48a45cf1597e3279869 Mon Sep 17 00:00:00 2001 From: Michelle Au Date: Sat, 24 Feb 2018 13:12:44 -0800 Subject: [PATCH 016/117] Update reclaim policy documentation to be consistent with field names (#7487) --- docs/concepts/storage/persistent-volumes.md | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-) diff --git a/docs/concepts/storage/persistent-volumes.md b/docs/concepts/storage/persistent-volumes.md index 50566a510c848..97fd66e6e07c5 100644 --- a/docs/concepts/storage/persistent-volumes.md +++ b/docs/concepts/storage/persistent-volumes.md @@ -98,20 +98,24 @@ Finalizers: [kubernetes.io/pvc-protection] When a user is done with their volume, they can delete the PVC objects from the API which allows reclamation of the resource. The reclaim policy for a `PersistentVolume` tells the cluster what to do with the volume after it has been released of its claim. Currently, volumes can either be Retained, Recycled or Deleted. -#### Retaining +#### Retain -The Retain reclaim policy allows for manual reclamation of the resource. When the `PersistentVolumeClaim` is deleted, the `PersistentVolume` still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps. +The `Retain` reclaim policy allows for manual reclamation of the resource. When the `PersistentVolumeClaim` is deleted, the `PersistentVolume` still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps. 1. Delete the `PersistentVolume`. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted. 1. Manually clean up the data on the associated storage asset accordingly. 1. Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new `PersistentVolume` with the storage asset definition. -#### Recycling +#### Delete -**Warning:** The recycling reclaim policy is being deprecated. Instead, the recommended approach is to use dynamic provisioning. +For volume plugins that support the `Delete` reclaim policy, deletion removes both the `PersistentVolume` object from Kubernetes, as well as the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the [reclaim policy of their `StorageClass`](#reclaim-policy), which defaults to `Delete`. The administrator should configure the `StorageClass` according to users' expectations, otherwise the PV must be edited or patched after it is created. See [Change the Reclaim Policy of a PersistentVolume](/docs/tasks/administer-cluster/change-pv-reclaim-policy/). + +#### Recycle + +**Warning:** The `Recycle` reclaim policy is deprecated. Instead, the recommended approach is to use dynamic provisioning. {: .warning} -If supported by appropriate volume plugin, recycling performs a basic scrub (`rm -rf /thevolume/*`) on the volume and makes it available again for a new claim. +If supported by the underlying volume plugin, the `Recycle` reclaim policy performs a basic scrub (`rm -rf /thevolume/*`) on the volume and makes it available again for a new claim. 
However, an administrator can configure a custom recycler pod template using the Kubernetes controller manager command line arguments as described [here](/docs/admin/kube-controller-manager/). The custom recycler pod template must contain a `volumes` specification, as shown in the example below: @@ -138,11 +142,6 @@ spec: However, the particular path specified in the custom recycler pod template in the `volumes` part is replaced with the particular path of the volume that is being recycled. -#### Deleting - -For volume plugins that support the Delete reclaim policy, deletion removes both the `PersistentVolume` object from Kubernetes, as well as deleting the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the [reclaim policy of their `StorageClass`](#reclaim-policy), which defaults to Delete. The administrator should configure the `StorageClass` according to users' expectations, otherwise the PV must be edited or patched after it is created. See [Change the Reclaim Policy of a PersistentVolume](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). - - ### Expanding Persistent Volumes Claims Kubernetes 1.8 added Alpha support for expanding persistent volumes. In v1.9, the following volume types support expanding Persistent volume claims: From c1beb824c43e7173a57ddf3015ece6c4ae91e09d Mon Sep 17 00:00:00 2001 From: Matt Braymer-Hayes Date: Sat, 24 Feb 2018 13:13:46 -0800 Subject: [PATCH 017/117] Update master-node-communication.md (#7467) Remove out-of-date reference to TODO (#7314). --- docs/concepts/architecture/master-node-communication.md | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/docs/concepts/architecture/master-node-communication.md b/docs/concepts/architecture/master-node-communication.md index 57e6f860ada48..ee957de22eace 100644 --- a/docs/concepts/architecture/master-node-communication.md +++ b/docs/concepts/architecture/master-node-communication.md @@ -43,13 +43,7 @@ The `kubernetes` service (in all namespaces) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver. -The master components communicate with the cluster apiserver over the -insecure (not encrypted or authenticated) port. This port is typically only -exposed on the localhost interface of the master machine, so that the master -components, all running on the same machine, can communicate with the -cluster apiserver. Over time, the master components will be migrated to use -the secure port with authentication and authorization (see -[#13598](https://github.com/kubernetes/kubernetes/issues/13598)). +The master components also communicate with the cluster apiserver over the secure port. As a result, the default operating mode for connections from the cluster (nodes and pods running on the nodes) to the master is secured by default From 97837c078c3b236cce3e58e27efe12bf2214a209 Mon Sep 17 00:00:00 2001 From: James Hill-Khurana Date: Sat, 24 Feb 2018 16:14:46 -0500 Subject: [PATCH 018/117] Fix Kubermatic Links (#7435) The Kubermatic Links now redirect to loodse.com. 
--- docs/setup/pick-right-solution.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/setup/pick-right-solution.md b/docs/setup/pick-right-solution.md index 381fd9b5db19f..5df752ea5b994 100644 --- a/docs/setup/pick-right-solution.md +++ b/docs/setup/pick-right-solution.md @@ -59,7 +59,7 @@ a Kubernetes cluster from scratch. * [Giant Swarm](https://giantswarm.io/product/) offers managed Kubernetes clusters in their own datacenter, on-premises, or on public clouds. -* [Kubermatic](https://kubermatic.io) provides managed Kubernetes clusters for various public clouds, including AWS and Digital Ocean, as well as on-premises with OpenStack integration. +* [Kubermatic](https://www.loodse.com) provides managed Kubernetes clusters for various public clouds, including AWS and Digital Ocean, as well as on-premises with OpenStack integration. # Turnkey Cloud Solutions @@ -82,7 +82,7 @@ These solutions allow you to create Kubernetes clusters on your internal, secure few commands. * [IBM Cloud Private](https://www.ibm.com/cloud-computing/products/ibm-cloud-private/) -* [Kubermatic](https://kubermatic.io/) +* [Kubermatic](https://www.loodse.com) # Custom Solutions From a49701cf4b5b1bfc46ee7510bb8557d9fbcf4885 Mon Sep 17 00:00:00 2001 From: Joseph Herlant Date: Sat, 24 Feb 2018 13:17:45 -0800 Subject: [PATCH 019/117] Fix explanation about eviction threshold (#7311) --- docs/tasks/administer-cluster/out-of-resource.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tasks/administer-cluster/out-of-resource.md b/docs/tasks/administer-cluster/out-of-resource.md index 62b090d223f25..77baa30ef86c7 100644 --- a/docs/tasks/administer-cluster/out-of-resource.md +++ b/docs/tasks/administer-cluster/out-of-resource.md @@ -312,7 +312,7 @@ To facilitate this scenario, the `kubelet` would be launched as follows: Implicit in this configuration is the understanding that "System reserved" should include the amount of memory covered by the eviction threshold. -To reach that capacity, either some Pod is using more than its request, or the system is using more than `500Mi`. +To reach that capacity, either some Pod is using more than its request, or the system is using more than `1.5Gi - 500Mi = 1Gi`. This configuration ensures that the scheduler does not place Pods on a node that immediately induce memory pressure and trigger eviction assuming those Pods use less than their configured request. From 4e248ae8f40ec224220c2309f3134da7061fd212 Mon Sep 17 00:00:00 2001 From: Martin Mosegaard Amdisen Date: Sat, 24 Feb 2018 22:18:44 +0100 Subject: [PATCH 020/117] Update kubeadm-upgrade.md (#7335) Added link to 1.9 upgrades. I am uncertain about the right document for `1.8.x` to `1.8.y` upgrades, as both these documents state that is their purpose: - https://kubernetes.io/docs/tasks/administer-cluster/kubeadm-upgrade-1-8/ - https://kubernetes.io/docs/tasks/administer-cluster/kubeadm-upgrade-1-9/ So I have left the link for that scenario as it is. 
---
 docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md b/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md
index e74ae581e5df7..d65b7d5947783 100755
--- a/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md
+++ b/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md
@@ -21,6 +21,8 @@ Please check these documents out for more detailed how-to-upgrade guidance:
  * [1.7.x to 1.7.y upgrades](/docs/tasks/administer-cluster/kubeadm-upgrade-1-8/)
  * [1.7 to 1.8 upgrades](/docs/tasks/administer-cluster/kubeadm-upgrade-1-8/)
  * [1.8.x to 1.8.y upgrades](/docs/tasks/administer-cluster/kubeadm-upgrade-1-8/)
+ * [1.8.x to 1.9.x upgrades](/docs/tasks/administer-cluster/kubeadm-upgrade-1-9/)
+ * [1.9.x to 1.9.y upgrades](/docs/tasks/administer-cluster/kubeadm-upgrade-1-9/)
 
 ## kubeadm upgrade plan {#cmd-upgrade-plan}
 {% include_relative generated/kubeadm_upgrade_plan.md %}

From 0c0e2c636d30500e5258574797a75013adeb2699 Mon Sep 17 00:00:00 2001
From: Philippe Pepiot
Date: Sat, 24 Feb 2018 23:14:44 +0100
Subject: [PATCH 021/117] pull image private: fix example variable name (#7510)

In the docs the secret is called regcred instead of regsecret

---
 docs/tasks/configure-pod-container/private-reg-pod.yaml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/tasks/configure-pod-container/private-reg-pod.yaml b/docs/tasks/configure-pod-container/private-reg-pod.yaml
index 703b7c4c14f3a..4029588dd0758 100644
--- a/docs/tasks/configure-pod-container/private-reg-pod.yaml
+++ b/docs/tasks/configure-pod-container/private-reg-pod.yaml
@@ -7,5 +7,5 @@ spec:
   - name: private-reg-container
     image: <your-private-image>
   imagePullSecrets:
-  - name: regsecret
+  - name: regcred

From 01098a1d42346de49752ff84aba7b90685e8465d Mon Sep 17 00:00:00 2001
From: Markus Banfi
Date: Sun, 25 Feb 2018 17:14:43 +0100
Subject: [PATCH 022/117] Fix reference to Python client library (#7504) (#7505)

* Fix reference to Python client library (#7504)

* fixup!
Fix reference to Python client library (#7504) --- docs/reference/client-libraries.md | 2 +- docs/reference/index.md | 2 +- docs/tasks/access-application-cluster/access-cluster.md | 4 ++-- docs/tasks/administer-cluster/access-cluster-api.md | 4 ++-- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/reference/client-libraries.md b/docs/reference/client-libraries.md index 023a788773609..0db4644822823 100644 --- a/docs/reference/client-libraries.md +++ b/docs/reference/client-libraries.md @@ -29,7 +29,7 @@ Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery | Language | Client Library | Sample Programs | |----------|----------------|-----------------| | Go | [github.com/kubernetes/client-go/](https://github.com/kubernetes/client-go/) | [browse](https://github.com/kubernetes/client-go/tree/master/examples) -| Python | [github.com/kubernetes-incubator/client-python/](https://github.com/kubernetes-incubator/client-python/) | [browse](https://github.com/kubernetes-incubator/client-python/tree/master/examples) +| Python | [github.com/kubernetes-client/python/](https://github.com/kubernetes-client/python/) | [browse](https://github.com/kubernetes-client/python/tree/master/examples) | Java | [github.com/kubernetes-client/java](https://github.com/kubernetes-client/java/) | [browse](https://github.com/kubernetes-client/java#installation) | dotnet | [github.com/kubernetes-client/csharp](https://github.com/kubernetes-client/csharp) | [browse](https://github.com/kubernetes-client/csharp/tree/master/examples/simple) diff --git a/docs/reference/index.md b/docs/reference/index.md index 7659541fed7de..428a468722147 100644 --- a/docs/reference/index.md +++ b/docs/reference/index.md @@ -21,7 +21,7 @@ To call the Kubernetes API from a programming language, you can use client libraries: - [Kubernetes Go client library](https://github.com/kubernetes/client-go/) -- [Kubernetes Python client library](https://github.com/kubernetes-incubator/client-python) +- [Kubernetes Python client library](https://github.com/kubernetes-client/python) ## CLI Reference diff --git a/docs/tasks/access-application-cluster/access-cluster.md b/docs/tasks/access-application-cluster/access-cluster.md index 1e053ef34a81f..a2826fb49d510 100644 --- a/docs/tasks/access-application-cluster/access-cluster.md +++ b/docs/tasks/access-application-cluster/access-cluster.md @@ -135,10 +135,10 @@ If the application is deployed as a Pod in the cluster, please refer to the [nex #### Python client -To use [Python client](https://github.com/kubernetes-incubator/client-python), run the following command: `pip install kubernetes`. See [Python Client Library page](https://github.com/kubernetes-incubator/client-python) for more installation options. +To use [Python client](https://github.com/kubernetes-client/python), run the following command: `pip install kubernetes`. See [Python Client Library page](https://github.com/kubernetes-client/python) for more installation options. The Python client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) -as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes-incubator/client-python/tree/master/examples/example1.py). 
+as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes-client/python/tree/master/examples/example1.py). #### Other languages diff --git a/docs/tasks/administer-cluster/access-cluster-api.md b/docs/tasks/administer-cluster/access-cluster-api.md index 8cf2ae92bc5d7..f107183500ce2 100644 --- a/docs/tasks/administer-cluster/access-cluster-api.md +++ b/docs/tasks/administer-cluster/access-cluster-api.md @@ -145,10 +145,10 @@ If the application is deployed as a Pod in the cluster, please refer to the [nex #### Python client -To use [Python client](https://github.com/kubernetes-incubator/client-python), run the following command: `pip install kubernetes` See [Python Client Library page](https://github.com/kubernetes-incubator/client-python) for more installation options. +To use [Python client](https://github.com/kubernetes-client/python), run the following command: `pip install kubernetes` See [Python Client Library page](https://github.com/kubernetes-client/python) for more installation options. The Python client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/) -as the kubectl CLI does to locate and authenticate to the API server. See this [example](https://github.com/kubernetes-incubator/client-python/tree/master/examples/example1.py): +as the kubectl CLI does to locate and authenticate to the API server. See this [example](https://github.com/kubernetes-client/python/tree/master/examples/example1.py): ```python from kubernetes import client, config From 213b87adc0e2a05899b9bec91011b0ea4b1509b0 Mon Sep 17 00:00:00 2001 From: Mike Date: Sun, 25 Feb 2018 15:39:44 -0500 Subject: [PATCH 023/117] * Changed range syntax to use paren to indicate range {0..N-1}. (#7509) * * Changed range syntax to use paren to indicate range {0..N-1}. Fixed #7507 * Updated wording per feedback from @enisoc. --- docs/concepts/workloads/controllers/statefulset.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/workloads/controllers/statefulset.md b/docs/concepts/workloads/controllers/statefulset.md index 146be2b043779..4bf49e7a70cbe 100644 --- a/docs/concepts/workloads/controllers/statefulset.md +++ b/docs/concepts/workloads/controllers/statefulset.md @@ -116,7 +116,7 @@ regardless of which node it's (re)scheduled on. ### Ordinal Index For a StatefulSet with N replicas, each Pod in the StatefulSet will be -assigned an integer ordinal, in the range [0,N], that is unique over the Set. +assigned an integer ordinal, from 0 up through N-1, that is unique over the Set. ### Stable Network ID From 75aeaa3ddb1d494bb9d81ccb88bad40aee8d7f15 Mon Sep 17 00:00:00 2001 From: Logan Rakai Date: Sun, 25 Feb 2018 21:25:42 -0700 Subject: [PATCH 024/117] Update networking.md to avoid the use of "we" (#7514) Proposed changes to avoid style guide antipattern https://kubernetes.io/docs/home/contribute/style-guide/#avoid-using-we --- .../cluster-administration/networking.md | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/docs/concepts/cluster-administration/networking.md b/docs/concepts/cluster-administration/networking.md index 559098f65b3d9..8421c9ee71154 100644 --- a/docs/concepts/cluster-administration/networking.md +++ b/docs/concepts/cluster-administration/networking.md @@ -20,15 +20,15 @@ default. 
There are 4 distinct networking problems to solve: ## Summary Kubernetes assumes that pods can communicate with other pods, regardless of -which host they land on. We give every pod its own IP address so you do not +which host they land on. Every pod gets its own IP address so you do not need to explicitly create links between pods and you almost never need to deal with mapping container ports to host ports. This creates a clean, backwards-compatible model where pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration. -To achieve this we must impose some requirements on how you set up your cluster -networking. +There are requirements imposed on how you set up your cluster networking to +achieve this. ## Docker model @@ -84,7 +84,7 @@ applies IP addresses at the `Pod` scope - containers within a `Pod` share their network namespaces - including their IP address. This means that containers within a `Pod` can all reach each other's ports on `localhost`. This does imply that containers within a `Pod` must coordinate port usage, but this is no -different than processes in a VM. We call this the "IP-per-pod" model. This +different than processes in a VM. This is called the "IP-per-pod" model. This is implemented in Docker as a "pod container" which holds the network namespace open while "app containers" (the things the user specified) join that namespace with Docker's `--net=container:` function. @@ -139,15 +139,15 @@ people have reported success with Flannel and Kubernetes. ### Google Compute Engine (GCE) -For the Google Compute Engine cluster configuration scripts, we use [advanced -routing](https://cloud.google.com/vpc/docs/routes) to +For the Google Compute Engine cluster configuration scripts, [advanced +routing](https://cloud.google.com/vpc/docs/routes) is used to assign each VM a subnet (default is `/24` - 254 IPs). Any traffic bound for that subnet will be routed directly to the VM by the GCE network fabric. This is in addition to the "main" IP address assigned to the VM, which is NAT'ed for outbound internet access. A linux bridge (called `cbr0`) is configured to exist on that subnet, and is passed to docker's `--bridge` flag. -We start Docker with: +Docker is started with: ```shell DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false" @@ -161,8 +161,8 @@ each other and `Nodes` over the `cbr0` bridge. Those IPs are all routable within the GCE project network. GCE itself does not know anything about these IPs, though, so it will not NAT -them for outbound internet traffic. To achieve that we use an iptables rule to -masquerade (aka SNAT - to make it seem as if packets came from the `Node` +them for outbound internet traffic. To achieve that an iptables rule is used +to masquerade (aka SNAT - to make it seem as if packets came from the `Node` itself) traffic that is bound for IPs outside the GCE project network (10.0.0.0/8). @@ -170,7 +170,7 @@ itself) traffic that is bound for IPs outside the GCE project network iptables -t nat -A POSTROUTING ! 
-d 10.0.0.0/8 -o eth0 -j MASQUERADE ``` -Lastly we enable IP forwarding in the kernel (so the kernel will process +Lastly IP forwarding is enabled in the kernel (so the kernel will process packets for bridged containers): ```shell From 0955d6be9034acce51dc58d794afe13297e14498 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pawe=C5=82=20Pra=C5=BCak?= Date: Mon, 26 Feb 2018 16:45:44 +0100 Subject: [PATCH 025/117] kubectl/cheatsheet - add command for sorting events (#7519) - add a command to "List Events sorted by timestamp" workaround for #29838 --- docs/reference/kubectl/cheatsheet.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/docs/reference/kubectl/cheatsheet.md b/docs/reference/kubectl/cheatsheet.md index 39aaaa3421fbe..284a1a98c4465 100644 --- a/docs/reference/kubectl/cheatsheet.md +++ b/docs/reference/kubectl/cheatsheet.md @@ -136,6 +136,9 @@ $ JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@. # List all Secrets currently in use by a pod $ kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq + +# List Events sorted by timestamp +$ kubectl get events --sort-by=.metadata.creationTimestamp ``` ## Updating Resources From e0ca08771edeefb66a39623ccad3d00d7af05ef8 Mon Sep 17 00:00:00 2001 From: Weibin Lin Date: Mon, 26 Feb 2018 23:49:43 +0800 Subject: [PATCH 026/117] Update scratch.md (#7516) --- docs/getting-started-guides/scratch.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index 18f2e546f576f..c364469eac583 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -91,7 +91,7 @@ to implement one of the above options: - You can also write your own. - **Compile support directly into Kubernetes** - This can be done by implementing the "Routes" interface of a Cloud Provider module. - - The Google Compute Engine ([GCE](/docs/getting-started-guides/gce/)/) and [AWS](/docs/getting-started-guides/aws/) guides use this approach. + - The Google Compute Engine ([GCE](/docs/getting-started-guides/gce/)) and [AWS](/docs/getting-started-guides/aws/) guides use this approach. - **Configure the network external to Kubernetes** - This can be done by manually running commands, or through a set of externally maintained scripts. - You have to implement this yourself, but it can give you an extra degree of flexibility. @@ -116,7 +116,7 @@ You will need to select an address range for the Pod IPs. Note that IPv6 is not Kubernetes also allocates an IP to each [service](/docs/concepts/services-networking/service/). However, service IPs do not necessarily need to be routable. The kube-proxy takes care of translating Service IPs to Pod IPs before traffic leaves the node. You do -need to Allocate a block of IPs for services. Call this +need to allocate a block of IPs for services. Call this `SERVICE_CLUSTER_IP_RANGE`. For example, you could set `SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16"`, allowing 65534 distinct services to be active at once. Note that you can grow the end of this range, but you From 1458363f5a7f3fb11ebdbf13e45880b2fb4f7a83 Mon Sep 17 00:00:00 2001 From: Sean Dague Date: Mon, 26 Feb 2018 08:15:43 -0800 Subject: [PATCH 027/117] Change glossary to sort_natural (#7523) The default sort in liquid is ASCIIbetical, but there is a built in sort_natural that allows you to sort things in human sensible ways. 
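For example, the difference shows up with any case-mixed list of glossary-style terms. This is only a rough shell analogy to Liquid's behaviour (byte-order sorting versus case-insensitive sorting), not Liquid itself:

```shell
# Byte-order ("ASCIIbetical") sorting places every capitalized term ahead of
# every lowercase one, which is how Liquid's plain `sort` orders the glossary.
printf 'kubelet\nPod\nAPI server\nnode\n' | LC_ALL=C sort

# A case-insensitive sort interleaves the terms the way readers expect,
# which is closer to what `sort_natural` does.
printf 'kubelet\nPod\nAPI server\nnode\n' | sort -f
```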
This updates the glossary list to use sort_natural instead. Fixes issue #7491 --- docs/reference/glossary.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/reference/glossary.md b/docs/reference/glossary.md index 4500fb5e1e5ce..8fd6ef94ac7b7 100644 --- a/docs/reference/glossary.md +++ b/docs/reference/glossary.md @@ -37,7 +37,7 @@ default_active_tag: fundamental

Click on the [+] indicators below to get a longer explanation for any particular term.

-{% assign glossary_terms = site.data.glossary | where_exp: "term", "term.id != '_example'" | sort: 'name' %} +{% assign glossary_terms = site.data.glossary | where_exp: "term", "term.id != '_example'" | sort_natural: 'name' %}
    {% for term in glossary_terms %} From 7d85f99953a521fc1b903b42de1992905960e4eb Mon Sep 17 00:00:00 2001 From: grodrigues3 Date: Mon, 26 Feb 2018 09:14:43 -0800 Subject: [PATCH 028/117] Rebuild the community docs (#7498) * rebuild and update community imported docs * redundant page titles * Redundant page title removed --- docs/imported/community/devel.md | 5 +- docs/imported/community/guide.md | 121 +++++++------ docs/imported/community/keps.md | 253 +++++++++++---------------- docs/imported/community/mentoring.md | 32 ++-- 4 files changed, 187 insertions(+), 224 deletions(-) diff --git a/docs/imported/community/devel.md b/docs/imported/community/devel.md index 62fd70223f736..750c4d563e3fb 100755 --- a/docs/imported/community/devel.md +++ b/docs/imported/community/devel.md @@ -15,7 +15,7 @@ Guide](http://kubernetes.io/docs/admin/). * **GitHub Issues** ([issues.md](https://github.com/kubernetes/community/tree/master/contributors/devel/issues.md)): How incoming issues are triaged. -* **Pull Request Process** ([pull-requests.md](https://github.com/kubernetes/community/tree/master/contributors/devel/pull-requests.md)): When and why pull requests are closed. +* **Pull Request Process** ([/contributors/guide/pull-requests.md](https://github.com/kubernetes/community/tree/master/contributors/guide/pull-requests.md)): When and why pull requests are closed. * **Getting Recent Builds** ([getting-builds.md](https://github.com/kubernetes/community/tree/master/contributors/devel/getting-builds.md)): How to get recent builds including the latest builds that pass CI. @@ -39,7 +39,7 @@ Guide](http://kubernetes.io/docs/admin/). ([instrumentation.md](https://github.com/kubernetes/community/tree/master/contributors/devel/instrumentation.md)): How to add a new metrics to the Kubernetes code base. -* **Coding Conventions** ([coding-conventions.md](https://github.com/kubernetes/community/tree/master/contributors/devel/coding-conventions.md)): +* **Coding Conventions** ([coding-conventions.md](https://github.com/kubernetes/community/tree/master/contributors/devel/../guide/coding-conventions.md)): Coding style advice for contributors. * **Document Conventions** ([how-to-doc.md](https://github.com/kubernetes/community/tree/master/contributors/devel/how-to-doc.md)) @@ -78,4 +78,3 @@ Guide](http://kubernetes.io/docs/admin/). ## Building releases See the [kubernetes/release](https://github.com/kubernetes/release) repository for details on creating releases and related tools and helper scripts. -ed tools and helper scripts. diff --git a/docs/imported/community/guide.md b/docs/imported/community/guide.md index 37e5a5e684c2a..be6bba58ae915 100755 --- a/docs/imported/community/guide.md +++ b/docs/imported/community/guide.md @@ -1,29 +1,16 @@ --- title: Kubernetes Contributor Guide owner: sig-contributor-experience +notitle: true --- -**OWNER:** -sig-contributor-experience +# Kubernetes Contributor Guide ## Disclaimer -Hello! This is the starting point for our brand new contributor guide, currently underway as per [issue#6102](https://github.com/kubernetes/website/issues/6102) and in need of help. Please be patient, or fix a section below that needs improvement, and submit a pull request! -Many of the links below should lead to relevant documents scattered across the community repository. Often, the linked instructions need to be updated or cleaned up. +Hello! 
This is the starting point for our brand new contributor guide, currently underway as per [issue#6102](https://github.com/kubernetes/website/issues/6102) and is in need of help. +Please be patient, or fix a section below that needs improvement, and submit a pull request! Feel free to browse the [open issues](https://github.com/kubernetes/community/issues?q=is%3Aissue+is%3Aopen+label%3Aarea%2Fcontributor-guide) and file new ones, all feedback welcome! -* If you do so, please move the relevant file from its previous location to the community/contributors/guide folder, and delete its previous location. -* Our goal is that all contributor guide specific files live in this folder. - -Please find _Improvements needed_ sections below and help us out. - -For example: - -_Improvements needed_ -* kubernetes/community/CONTRIBUTING.md -> Needs a rewrite - -* kubernetes/community/README.md -> Needs a rewrite - -* Individual SIG contributing documents -> add a link to this guide # Welcome @@ -31,10 +18,10 @@ Welcome to Kubernetes! This document is the single source of truth for how to co - [Before you get started](#before-you-get-started) - [Sign the CLA](#sign-the-cla) + - [Code of Conduct](#code-of-conduct) - [Setting up your development environment](#setting-up-your-development-environment) - - [Community Expectations](#community-expectations) - - [Code of Conduct](#code-of-conduct) + - [Community Expectations and Roles](#community-expectations-and-roles) - [Thanks](#thanks) - [Your First Contribution](#your-first-contribution) - [Find something to work on](#find-something-to-work-on) @@ -51,9 +38,9 @@ Welcome to Kubernetes! This document is the single source of truth for how to co - [Documentation](#documentation) - [Issues Management or Triage](#issues-management-or-triage) - [Community](#community) + - [Communication](#communication-1) - [Events](#events) - [Meetups](#meetups) - - [KubeCon](#kubecon) - [Mentorship](#mentorship) # Before you get started @@ -62,36 +49,26 @@ Welcome to Kubernetes! This document is the single source of truth for how to co Before you can contribute, you will need to sign the [Contributor License Agreement](https://github.com/kubernetes/community/tree/master/CLA.md). -## Setting up your development environment +## Code of Conduct -If you haven’t set up your environment, please find resources [here](https://github.com/kubernetes/community/tree/master/contributors/devel). These resources are not well organized currently; please have patience as we are working on it. +Please make sure to read and observe our [Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md). -_Improvements needed_ -* A new developer guide will be created and linked to in this section. +## Setting up your development environment - * RyanJ from Red Hat is working on this +If you haven’t set up your environment, please find resources [here](https://github.com/kubernetes/community/tree/master/contributors/devel). -## Community Expectations +## Community Expectations and Roles Kubernetes is a community project. Consequently, it is wholly dependent on its community to provide a productive, friendly and collaborative environment. -The first and foremost goal of the Kubernetes community to develop orchestration technology that radically simplifies the process of creating reliable distributed systems. 
However a second, equally important goal is the creation of a community that fosters easy, agile development of such orchestration systems. - -We therefore describe the expectations for members of the Kubernetes community. This document is intended to be a living one that evolves as the community evolves via the same pull request and code review process that shapes the rest of the project. It currently covers the expectations of conduct that govern all members of the community as well as the expectations around code review that govern all active contributors to Kubernetes. - -### Code of Conduct - -Please make sure to read and observe our [Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) - -### Thanks - -Many thanks in advance to everyone who contributes their time and effort to making Kubernetes both a successful system as well as a successful community. The strength of our software shines in the strengths of each individual community member. Thanks! +- Read and review the [Community Expectations](https://github.com/kubernetes/community/tree/master/contributors/guide/community-expectations.md) for an understand of code and review expectations. +- See [Community Membership](https://github.com/kubernetes/community/tree/master/community-membership.md) for a list the various responsibilities of contributor roles. You are encouraged to move up this contributor ladder as you gain experience. # Your First Contribution Have you ever wanted to contribute to the coolest cloud technology? We will help you understand the organization of the Kubernetes project and direct you to the best places to get started. You'll be able to pick up issues, write code to fix them, and get your work reviewed and merged. -Please be aware that due to the large number of issues our triage team deals with, we cannot offer technical support in GitHub issues. If you have questions about the development process, feel free to jump into our [Slack Channel](http://slack.k8s.io/) or join our [mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). You can also ask questions on [ServerFault](https://serverfault.com/questions/tagged/kubernetes) or [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes). The Kubernetes team scans Stack Overflow on a regular basis, and will try to ensure your questions don't go unanswered. +Please be aware that due to the large number of issues our triage team deals with, we cannot offer technical support in GitHub issues. If you have questions about the development process, feel free to jump into our [Slack Channel](http://slack.k8s.io/) or join our [mailing list](https://groups.google.com/forum/#!forum/kubernetes-dev). You can also ask questions on [ServerFault](https://serverfault.com/questions/tagged/kubernetes) or [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes). The Kubernetes team scans Stack Overflow on a regular basis and will try to ensure your questions don't go unanswered. ## Find something to work on @@ -109,7 +86,8 @@ Another good strategy is to find a documentation improvement, such as a missing/ #### Sig structure You may have noticed that some repositories in the Kubernetes Organization are owned by Special Interest Groups, or SIGs. We organize the Kubernetes community into SIGs in order to improve our workflow and more easily manage what is a very large community project. The developers within each SIG have autonomy and ownership over that SIG's part of Kubernetes. 
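A quick way to get a feel for the SIG structure is to browse the `sig-*` directories in the community repository. The following is just a minimal sketch; it assumes `git` is installed and that you have network access:

```shell
# Clone the community repo and list the SIG directories.
# Each directory has a README with that SIG's meeting and contact information.
git clone --depth 1 https://github.com/kubernetes/community.git
ls -d community/sig-*
```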
-SIGs also have their own CONTRIBUTING.md files, which may contain extra information or guidelines in addition to these general ones. These are located in the SIG specific community documentation directories, for example: sig-docs' is in the kubernetes/community repo's [/sig-docs/CONTRIBUTING.md](https://github.com/kubernetes/community/tree/master/sig-docs/CONTRIBUTING.md) file and similarly for other SIGs. + +Some SIGs also have their own `CONTRIBUTING.md` files, which may contain extra information or guidelines in addition to these general ones. These are located in the SIG-specific community directories. For example: the contributor's guide for SIG CLI is located in the *kubernetes/community* repo, as [`/sig-cli/CONTRIBUTING.md`](https://github.com/kubernetes/community/tree/master/sig-cli/CONTRIBUTING.md). Like everything else in Kubernetes, a SIG is an open, community, effort. Anybody is welcome to jump into a SIG and begin fixing issues, critiquing design proposals and reviewing code. SIGs have regular [video meetings](https://kubernetes.io/community/) which everyone is welcome to. Each SIG has a kubernetes slack channel that you can join as well. @@ -119,23 +97,23 @@ show up to one of the [bi-weekly meetings](https://docs.google.com/document/d/1q #### Find a SIG that is related to your contribution -Finding the appropriate SIG for your contribution will help you ask questions in the correct place and give your contribution higher visibility and a faster community response. +Finding the appropriate SIG for your contribution and adding a SIG label will help you ask questions in the correct place and give your contribution higher visibility and a faster community response. For Pull Requests, the automatically assigned reviewer will add a SIG label if you haven't done so. See [Open A Pull Request](#open-a-pull-request) below. -For Issues we are still working on a more automated workflow. Since SIGs do not directly map onto Kubernetes subrepositories, it may be difficult to find which SIG your contribution belongs in. Here is the [list of SIGs](https://github.com/kubernetes/community/tree/master/sig-list.md). Determine which is most likely related to your contribution. +For Issues, we are still working on a more automated workflow. Since SIGs do not directly map onto Kubernetes subrepositories, it may be difficult to find which SIG your contribution belongs in. Here is the [list of SIGs](https://github.com/kubernetes/community/tree/master/sig-list.md). Determine which is most likely related to your contribution. -*Example:* if you are filing a cni issue, you should choose SIG-networking. +*Example:* if you are filing a cni issue, you should choose the [Network SIG](http://git.k8s.io/community/sig-network). Add the SIG label in a comment like so: +``` +/sig network +``` Follow the link in the SIG name column to reach each SIGs README. Most SIGs will have a set of GitHub Teams with tags that can be mentioned in a comment on issues and pull requests for higher visibility. If you are not sure about the correct SIG for an issue, you can try SIG-contributor-experience [here](https://github.com/kubernetes/community/tree/master/sig-contributor-experience#github-teams), or [ask in Slack](http://slack.k8s.io/). -_Improvements needed_ - -* Open pull requests with all applicable SIGs to not have duplicate information in their CONTRIBUTING.md and instead link here. Keep it light, keep it clean, have only one source of truth. 
- ### File an Issue Not ready to contribute code, but see something that needs work? While the community encourages everyone to contribute code, it is also appreciated when someone reports an issue (aka problem). Issues should be filed under the appropriate Kubernetes subrepository. +Check the [issue triage guide](https://github.com/kubernetes/community/tree/master/contributors/guide/./issue-triage.md) for more information. *Example:* a documentation issue should be opened to [kubernetes/website](https://github.com/kubernetes/website/issues). @@ -143,19 +121,21 @@ Make sure to adhere to the prompted submission guidelines while opening an issue # Contributing -(From:[here](https://github.com/kubernetes/community/tree/master/contributors/devel/collab.md)) - -Kubernetes is open source, but many of the people working on it do so as their day job. In order to avoid forcing people to be "at work" effectively 24/7, we want to establish some semi-formal protocols around development. Hopefully these rules make things go more smoothly. If you find that this is not the case, please complain loudly. +Kubernetes is open source, but many of the people working on it do so as their day job. In order to avoid forcing people to be "at work" effectively 24/7, we want to establish some semi-formal protocols around development. Hopefully, these rules make things go more smoothly. If you find that this is not the case, please complain loudly. As a potential contributor, your changes and ideas are welcome at any hour of the day or night, weekdays, weekends, and holidays. Please do not ever hesitate to ask a question or send a pull request. -Our community guiding principles on how to create great code as a big group are found [here](https://github.com/kubernetes/community/tree/master/contributors/devel/collab.md). Beginner focused information can be found below in [Open a Pull Request](#open-a-pull-request) and [Code Review](#code-review). +Our community guiding principles on how to create great code as a big group are found [here](https://github.com/kubernetes/community/tree/master/contributors/devel/collab.md). + +Beginner focused information can be found below in [Open a Pull Request](#open-a-pull-request) and [Code Review](#code-review). + +For quick reference on contributor resources, we have a handy [contributor cheatsheet](https://github.com/kubernetes/community/tree/master/contributors/guide/./contributor-cheatsheet.md) ### Communication It is best to contact your [SIG](#learn-about-sigs) for issues related to the SIG's topic. Your SIG will be able to help you much more quickly than a general question would. -For questions and troubleshooting, please feel free to use any of the methods of communication listed [here](https://github.com/kubernetes/community/tree/master/communication.md). The [kubernetes website](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/) also lists this information. +For general questions and troubleshooting, use the [kubernetes standard lines of communication](https://github.com/kubernetes/community/tree/master/communication.md) and work through the [kubernetes troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/). ## GitHub workflow @@ -163,7 +143,9 @@ To check out code to work on, please refer to [this guide](https://github.com/ku ## Open a Pull Request -Pull requests are often called simply "PR". 
Kubernetes generally follows the standard [github pull request](https://help.github.com/articles/about-pull-requests/) process, but there is a layer of additional kubernetes specific (and sometimes SIG specific) differences. +Pull requests are often called simply "PR". Kubernetes generally follows the standard [github pull request](https://help.github.com/articles/about-pull-requests/) process, but there is a layer of additional kubernetes specific (and sometimes SIG specific) differences: + +- [Kubernetes-specific github workflow](https://github.com/kubernetes/community/tree/master/contributors/guide/pull-requests.md#the-testing-and-merge-workflow). The first difference you'll see is that a bot will begin applying structured labels to your PR. @@ -174,20 +156,20 @@ Common new contributor PR issues are: * not having correctly signed the CLA ahead of your first PR (see [Sign the CLA](#sign-the-cla) section) * finding the right SIG or reviewer(s) for the PR (see [Code Review](#code-review) section) and following any SIG specific contributing guidelines * dealing with test cases which fail on your PR, unrelated to the changes you introduce (see [Test Flakes](http://velodrome.k8s.io/dashboard/db/bigquery-metrics?orgId=1)) - -The pull request workflow is described in detail [here](https://github.com/kubernetes/community/tree/master/contributors/devel/pull-requests.md#the-testing-and-merge-workflow). +* Not following [scalability good practices](https://github.com/kubernetes/community/tree/master/contributors/guide/scalability-good-practices.md) ## Code Review -For a brief description of the importance of code review, please read [On Code Review](https://github.com/kubernetes/community/tree/master/contributors/devel/community-expectations.md#code-review). There are two aspects of code review: giving and receiving. +For a brief description of the importance of code review, please read [On Code Review](https://github.com/kubernetes/community/tree/master/contributors/guide/community-expectations.md#code-review). There are two aspects of code review: giving and receiving. To make it easier for your PR to receive reviews, consider the reviewers will need you to: +* follow the project [coding conventions](https://github.com/kubernetes/community/tree/master/contributors/guide/coding-conventions.md) * write [good commit messages](https://chris.beams.io/posts/git-commit/) * break large changes into a logical series of smaller patches which individually make easily understandable changes, and in aggregate solve a broader issue * label PRs with appropriate SIGs and reviewers: to do this read the messages the bot sends you to guide you through the PR process -Reviewers, the people giving review, are highly encouraged to revisit the [Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) and must go above and beyond to promote a collaborative, respectful Kubernetes community. When reviewing PRs from others [The Gentle Art of Patch Review](http://sage.thesharps.us/2014/09/01/the-gentle-art-of-patch-review/) suggests an iterative series of focuses which is designed to lead new contributors to positive collaboration without inundating them initially with nuances: +Reviewers, the people giving the review, are highly encouraged to revisit the [Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) and must go above and beyond to promote a collaborative, respectful Kubernetes community. 
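As a practical matter, reviewers and approvers record much of their feedback through the same `/command` comments mentioned above. The handful below is only a sketch of commonly seen review commands; treat the exact list as an assumption and follow whatever the bot suggests on your particular PR (the reviewer handle is a placeholder):

```
/assign @example-reviewer
/lgtm
/approve
```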
When reviewing PRs from others [The Gentle Art of Patch Review](http://sage.thesharps.us/2014/09/01/the-gentle-art-of-patch-review/) suggests an iterative series of focuses which is designed to lead new contributors to positive collaboration without inundating them initially with nuances: * Is the idea behind the contribution sound? * Is the contribution architected correctly? @@ -201,7 +183,7 @@ The main testing overview document is [here](https://github.com/kubernetes/commu There are three types of test in kubernetes. The location of the test code varies with type, as does the specifics of the environment needed to successfully run the test: -* Unit: These confirm that a particular function behaves as intended. Golang includes native ability for unit testing via the [testing](https://golang.org/pkg/testing/) package. Unit test source code can be found adjacent to the corresponding source code within a given package. For example: functions defined in [kubernetes/cmd/kubeadm/app/util/version.go](https://git.k8s.io/kubernetes/cmd/kubeadm/app/util/version.go) will have unit tests in [kubernetes/cmd/kubeadm/app/util/version_test.go](https://git.k8s.io/kubernetes/cmd/kubeadm/app/util/version_test.go). These are easily run locally be any developer on any OS. +* Unit: These confirm that a particular function behaves as intended. Golang includes a native ability for unit testing via the [testing](https://golang.org/pkg/testing/) package. Unit test source code can be found adjacent to the corresponding source code within a given package. For example: functions defined in [kubernetes/cmd/kubeadm/app/util/version.go](https://git.k8s.io/kubernetes/cmd/kubeadm/app/util/version.go) will have unit tests in [kubernetes/cmd/kubeadm/app/util/version_test.go](https://git.k8s.io/kubernetes/cmd/kubeadm/app/util/version_test.go). These are easily run locally by any developer on any OS. * Integration: These tests cover interactions of package components or interactions between kubernetes components and some other non-kubernetes system resource (eg: etcd). An example would be testing whether a piece of code can correctly store data to or retrieve data from etcd. Integration tests are stored in [kubernetes/test/integration/](https://git.k8s.io/kubernetes/test/integration). Running these can require the developer set up additional functionality on their development system. * End-to-end ("e2e"): These are broad tests of overall kubernetes system behavior and coherence. These are more complicated as they require a functional kubernetes cluster built from the sources to be tested. A separate document [here](https://github.com/kubernetes/community/tree/master/contributors/devel/e2e-tests.md) details e2e testing and test cases themselves can be found in [kubernetes/test/e2e/](https://git.k8s.io/kubernetes/test/e2e). @@ -211,13 +193,12 @@ sig-testing is responsible for that official infrastructure and CI. The associa ## Security -_Improvements needed_ + * Please help write this section. ## Documentation -_Improvements needed_ -* Please help write this section. +- [Contributing to Documentation](https://kubernetes.io/editdocs/) ## Issues Management or Triage @@ -227,15 +208,31 @@ Have you ever noticed the total number of [open issues](https://issues.k8s.io)? If you haven't noticed by now, we have a large, lively, and friendly open-source community. We depend on new people becoming members and regular code contributors, so we would like you to come join us. 
To find out more about our community structure, different levels of membership and code contributors, please [explore here](https://github.com/kubernetes/community/tree/master/community-membership.md). -_Improvements needed_ +## Communication -* The top level k/community/README.md should be a good starting point for what the community is and does. (see above instructions on rewriting this file) +- [General Information](https://github.com/kubernetes/community/tree/master/communication) ## Events + Kubernetes is the main focus of CloudNativeCon/KubeCon, held twice per year in EMEA and in North America. Information about these and other community events is available on the CNCF [events](https://www.cncf.io/events/) pages. ### Meetups +We follow the general [Cloud Native Computing Foundation guidelines](https://github.com/cncf/meetups) for Meetups. You may also contact Paris Pittman via direct message on Kubernetes Slack (@paris) or by email (parispittman@google.com) + +## Mentorship + +Please learn about our mentoring initiatives [here](http://git.k8s.io/community/mentoring/README.md). + +# Advanced Topics + +This section includes things that need to be documented, but typical contributors do not need to interact with regularly. + +- [OWNERS files](https://github.com/kubernetes/community/tree/master/contributors/guide/owners.md) - The Kubernetes organizations are managed with OWNERS files, which outline which parts of the code are owned by what groups. +EMEA and in North America. Information about these and other community events is available on the CNCF [events](https://www.cncf.io/events/) pages. + +### Meetups + _Improvements needed_ * include link to meetups * information on CNCF support for founding a Meetup diff --git a/docs/imported/community/keps.md b/docs/imported/community/keps.md index 2bdd3a0c3cd57..2c277645d3aa5 100755 --- a/docs/imported/community/keps.md +++ b/docs/imported/community/keps.md @@ -1,32 +1,26 @@ --- title: Kubernetes Enhancement Proposal Process --- - -## Metadata -``` --- kep-number: 1 title: Kubernetes Enhancement Proposal Process authors: - - name: Caleb Miles - github: calebamiles - slack: calebamiles - - name: Joe Beda - github: jbeda - email: joe@heptio.com - slack: jbeda + - "@calebamiles" + - "@jbeda" owning-sig: sig-architecture participating-sigs: - - `kubernetes-wide` + - kubernetes-wide reviewers: - - name: TBD + - name: "@timothysc" approvers: - - name: TBD + - name: "@bgrant0607" editor: - name: TBD + name: "@jbeda" creation-date: 2017-08-22 -status: draft -``` +status: implementable +--- + +# Kubernetes Enhancement Proposal Process ## Table of Contents @@ -63,15 +57,14 @@ A standardized development process for Kubernetes is proposed in order to - support the creation of _high value user facing_ information such as: - an overall project development roadmap - motivation for impactful user facing changes -- support development across multiple repositories beyond `kubernetes/kubernetes` - reserve GitHub issues for tracking work in flight rather than creating "umbrella" issues - ensure community participants are successfully able to drive changes to completion across one or more releases while stakeholders are adequately represented throughout the process -This process is supported by a unit of work called a Kubernetes Enhancement -Proposal or KEP. A KEP attempts to combine aspects of a +This process is supported by a unit of work called a Kubernetes Enhancement Proposal or KEP. 
+A KEP attempts to combine aspects of a - feature, and effort tracking document - a product requirements document @@ -91,15 +84,7 @@ and communicate upcoming changes to Kubernetes. In a blog post describing the > in a way that someone working in a different environment can understand as a project it is vital to be able to track the chain of custody for a proposed -enhancement from conception through implementation. This proposal does not -attempt to mandate how SIGs track their work internally, however, it is -suggested that SIGs which do not adhere to a process which allows for their hard -work to be explained to others in the wider Kubernetes community will see their -work wallow in the shadows of obscurity. At the very least [survey data][] -suggest that high quality documentation is crucial to project adoption. -Documentation can take many forms and it is imperative to ensure that it is easy -to produce high quality user or developer focused documentation for a complex -project like Kubernetes. +enhancement from conception through implementation. Without a standardized mechanism for describing important enhancements our talented technical writers and product managers struggle to weave a coherent @@ -119,9 +104,7 @@ contained in [design proposals][] is both clear and efficient. The KEP process is intended to create high quality uniform design and implementation documents for SIGs to deliberate. -[tell a story]: https://blog.rust-lang.org/2017/08/31/Rust-1.20.html [road to Go 2]: https://blog.golang.org/toward-go2 -[survey data]: http://opensourcesurvey.org/2017/ [design proposals]: /contributors/design-proposals @@ -133,8 +116,7 @@ The definition of what constitutes an "enhancement" is a foundational concern for the Kubernetes project. Roughly any Kubernetes user or operator facing enhancement should follow the KEP process: if an enhancement would be described in either written or verbal communication to anyone besides the KEP author or -developer then consider creating a KEP. One concrete example is an enhancement -which should be communicated to SIG Release or SIG PM. +developer then consider creating a KEP. Similarly, any technical effort (refactoring, major architectural change) that will impact a large section of the development community should also be @@ -151,11 +133,16 @@ proposing governance changes. However, as changes start impacting other SIGs or the larger developer community outside of a SIG, the KEP process should be used to coordinate and communicate. -### KEP Template +Enhancements that have major impacts on multiple SIGs should use the KEP process. +A single SIG will own the KEP but it is expected that the set of approvers will span the impacted SIGs. +The KEP process is the way that SIGs can negotiate and communicate changes that cross boundaries. -The template for a KEP is precisely defined in the [template proposal][] +KEPs will also be used to drive large changes that will cut across all parts of the project. +These KEPs will be owned by SIG-architecture and should be seen as a way to communicate the most fundamental aspects of what Kubernetes is. -[template proposal]: https://github.com/kubernetes/community/pull/1124 +### KEP Template + +The template for a KEP is precisely defined [here](https://github.com/kubernetes/community/tree/master/keps/0000-kep-template.md) ### KEP Metadata @@ -180,18 +167,16 @@ Metadata items: KEP filename. See the template for instructions and details. * **status** Required * The current state of the KEP. 
- * Must be one of `Draft`, `Deferred`, `Approved`, `Rejected`, `Withdrawn`, - `Final`, `Replaced`. + * Must be one of `provisional`, `implementable`, `implemented`, `deferred`, `rejected`, `withdrawn`, or `replaced`. * **authors** Required - * A list of authors for the KEP. We require a name (which can be a pseudonym) - along with a github ID. Other ways to contact the author is strongly - encouraged. This is a list of maps. Subkeys of each item: `name`, - `github`, `email` (optional), `slack` (optional). + * A list of authors for the KEP. + This is simply the github ID. + In the future we may enhance this to include other types of identification. * **owning-sig** Required * The SIG that is most closely associated with this KEP. If there is code or other artifacts that will result from this KEP, then it is expected that this SIG will take responsibility for the bulk of those artifacts. - * SIGs are listed as `sig-abc-def` where the name matches up with the + * Sigs are listed as `sig-abc-def` where the name matches up with the directory in the `kubernetes/community` repo. * **participating-sigs** Optional * A list of SIGs that are involved or impacted by this KEP. @@ -201,8 +186,15 @@ Metadata items: * Reviewer(s) chosen after triage according to proposal process * If not yet chosen replace with `TBD` * Same name/contact scheme as `authors` + * Reviewers should be a distinct set from authors. * **approvers** Required * Approver(s) chosen after triage according to proposal process + * Approver(s) are drawn from the impacted SIGs. + It is up to the individual SIGs to determine how they pick approvers for KEPs impacting them. + The approvers are speaking for the SIG in the process of approving this KEP. + The SIGs in question can modify this list as necessary. + * The approvers are the individuals that make the call to move this KEP to the `approved` state. + * Approvers should be a distinct set from authors. * If not yet chosen replace with `TBD` * Same name/contact scheme as `authors` * **editor** Required @@ -231,106 +223,36 @@ Metadata items: ### KEP Workflow -TODO(jbeda) Rationalize this with status entires in the Metadata above. 
- -A KEP is proposed to have the following states - -- **opened**: a new KEP has been filed but not triaged by the responsible SIG or - working group -- **accepted**: the motivation has been accepted by the SIG or working group as in - road map -- **scoped**: the design has been approved by the SIG or working group -- **started**: the implementation of the KEP has begun -- **implemented**: the implementation of the KEP is complete -- **deferred**: the KEP has been postponed by the SIG or working group despite - agreement on the motivation -- **superseded**: the KEP has been superseded by another KEP -- **retired**: the implementation of the KEP has been removed -- **rejected**: the KEP has been rejected by the SIG or working group -- **orphaned**: the author or developer of the KEP is no longer willing or able - to complete implementation - -with possible paths through the state space - -- opened -> deferred (a) -- opened -> rejected (b) -- opened -> orphaned (c) -- opened -> accepted -> orphaned (d) -- opened -> accepted -> scoped -> superseded (e) -- opened -> accepted -> scoped -> orphaned (f) -- opened -> accepted -> scoped -> started -> retired (g) -- opened -> accepted -> scoped -> started -> orphaned (h) -- opened -> accepted -> scoped -> started -> superseded (i) -- opened -> accepted -> scoped -> started -> implemented (j) -- opened -> accepted -> scoped -> started -> implemented -> retired (k) - -the happy path is denoted by (j) where an KEP is opened; accepted by a SIG as in -their roadmap; fleshed out with a design; started; and finally implemented. As -Kubernetes continues to mature, hopefully metrics on the utilization of features -will drive decisions on what features to maintain and which to deprecate and so -it is possible that a KEP would be retired if its functionality no longer provides -sufficient value to the community. +A KEP has the following states + +- `provisional`: The KEP has been proposed and is actively being defined. + This is the starting state while the KEP is being fleshed out and actively defined and discussed. + The owning SIG has accepted that this is work that needs to be done. +- `implementable`: The approvers have approved this KEP for implementation. +- `implemented`: The KEP has been implemented and is no longer actively changed. +- `deferred`: The KEP is proposed but not actively being worked on. +- `rejected`: The approvers and authors have decided that this KEP is not moving forward. + The KEP is kept around as a historical document. +- `withdrawn`: The KEP has been withdrawn by the authors. +- `replaced`: The KEP has been replaced by a new KEP. + The `superseded-by` metadata value should point to the new KEP. 
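For example, a brand new KEP typically enters this workflow as a draft file whose front-matter records `status: provisional` until the approvers move it forward. The snippet below is only a sketch with made-up values; the directory and file naming conventions are described in the Git and GitHub Implementation section that follows:

```shell
# Sketch: create a draft KEP with minimal front-matter, run from the root of
# a kubernetes/community checkout. The title, author, and SIG are placeholders.
cat > kep/draft-20180226-example-enhancement.md <<'EOF'
---
title: Example Enhancement
authors:
  - "@example-contributor"
owning-sig: sig-architecture
status: provisional
---
EOF
```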
### Git and GitHub Implementation -Practically an KEP would be implemented as a pull request to a central repository -with the following example structure - -``` -├── 0000-kep-template.md -├── CODEOWNERS -├── index.md -├── sig-architecture -│   ├── deferred -│   ├── orphaned -│   └── retired -├── sig-network -│   ├── deferred -│   ├── kube-dns -│   ├── orphaned -│   └── retired -├── sig-node -│   ├── deferred -│   ├── kubelet -│   ├── orphaned -│   └── retired -├── sig-release -│   ├── deferred -│   ├── orphaned -│   └── retired -├── sig-storage -│   ├── deferred -│   ├── orphaned -│   └── retired -├── unsorted-to-be-used-by-newcomers-only -└── wg-resource-management - ├── deferred - ├── orphaned - └── retired -``` - -where each SIG or working group is given a top level directory with subprojects -maintained by the SIG listed in sub directories. For newcomers to the community -an `unsorted-to-be-used-by-newcomers-only` directory may be used before an KEP -can be properly routed to a SIG although hopefully if discussion for a potential -KEP begins on the mailing lists proper routing information will be provided to -the KEP author. Additionally a top level index of KEPs may be helpful for people -looking for a complete list of KEPs. There should be basic CI to ensure that an -`index.md` remains up to date. - -Ideally no work would begin within the repositories of the Kubernetes organization -before a KEP has been approved by the responsible SIG or working group. While the -details of how SIGs organize their work is beyond the scope of this proposal one -possibility would be for each charter SIG to create a top level repository within -the Kubernetes org where implementation issues managed by that SIG would be filed. +KEPs are checked into the community repo under the `/kep` directory. +In the future, as needed we can add SIG specific subdirectories. +KEPs in SIG specific subdirectories have limited impact outside of the SIG and can leverage SIG specific OWNERS files. + +New KEPs can be checked in with a file name in the form of `draft-YYYYMMDD-my-title.md`. +As significant work is done on the KEP the authors can assign a KEP number. +This is done by taking the next number in the NEXT_KEP_NUMBER file, incrementing that number, and renaming the KEP. +No other changes should be put in that PR so that it can be approved quickly and minimize merge conflicts. +The KEP number can also be done as part of the initial submission if the PR is likely to be uncontested and merged quickly. ### KEP Editor Role -Taking a cue from the [Python PEP process][], I believe that a group of KEP editors -will be required to make this process successful; the job of an KEP editor is -likely very similar to the [PEP editor responsibilities][] and will hopefully -provide another opportunity for people who do not write code daily to contribute -to Kubernetes. +Taking a cue from the [Python PEP process][], we define the role of a KEP editor. +The job of an KEP editor is likely very similar to the [PEP editor responsibilities][] and will hopefully provide another opportunity for people who do not write code daily to contribute to Kubernetes. In keeping with the PEP editors which @@ -340,8 +262,8 @@ In keeping with the PEP editors which > Edit the PEP for language (spelling, grammar, sentence structure, etc.), markup > (for reST PEPs), code style (examples should match PEP 8 & 7). -KEP editors should generally not pass judgement on a KEP beyond editorial -corrections. 
+KEP editors should generally not pass judgement on a KEP beyond editorial corrections. +KEP editors can also help inform authors about the process and otherwise help things move smoothly. [Python PEP process]: https://www.python.org/dev/peps/pep-0001/ [PEP editor responsibilities]: https://www.python.org/dev/peps/pep-0001/#pep-editor-responsibilities-workflow @@ -351,7 +273,7 @@ corrections. It is proposed that the primary metrics which would signal the success or failure of the KEP process are -- how many "features" are tracked with a KEP +- how many "enhancements" are tracked with a KEP - distribution of time a KEP spends in each state - KEP rejection rate - PRs referencing a KEP merged per week @@ -364,18 +286,11 @@ failure of the KEP process are ### Prior Art -The KEP process as proposed was essentially stolen from the [Rust RFC process] which +The KEP process as proposed was essentially stolen from the [Rust RFC process][] which itself seems to be very similar to the [Python PEP process][] [Rust RFC process]: https://github.com/rust-lang/rfcs -## Graduation Criteria - -should hit at least the following milestones - -- a release note draft can be generated by referring primarily to KEP content -- a yearly road map is expressed as a KEP - ## Drawbacks Any additional process has the potential to engender resentment within the @@ -444,6 +359,50 @@ and durable storage. ## Unresolved Questions +- How reviewers and approvers are assigned to a KEP +- Example schedule, deadline, and time frame for each stage of a KEP +- Communication/notification mechanisms +- Review meetings and escalation procedure + roadmap][] +- the fact that the [what constitutes a feature][] is still undefined +- [issue management][] +- the difference between an [accepted design and a proposal][] +- [the organization of design proposals][] + +this proposal attempts to place these concerns within a general framework. + +[architectural roadmap]: https://github.com/kubernetes/community/issues/952 +[what constitutes a feature]: https://github.com/kubernetes/community/issues/531 +[issue management]: https://github.com/kubernetes/community/issues/580 +[accepted design and a proposal]: https://github.com/kubernetes/community/issues/914 +[the organization of design proposals]: https://github.com/kubernetes/community/issues/918 + +### Github issues vs. KEPs + +The use of GitHub issues when proposing changes does not provide SIGs good +facilities for signaling approval or rejection of a proposed change to Kubernetes +since anyone can open a GitHub issue at any time. Additionally managing a proposed +change across multiple releases is somewhat cumbersome as labels and milestones +need to be updated for every release that a change spans. These long lived GitHub +issues lead to an ever increasing number of issues open against +`kubernetes/features` which itself has become a management problem. + +In addition to the challenge of managing issues over time, searching for text +within an issue can be challenging. The flat hierarchy of issues can also make +navigation and categorization tricky. While not all community members might +not be comfortable using Git directly, it is imperative that as a community we +work to educate people on a standard set of tools so they can take their +experience to other projects they may decide to work on in the future. 
While +git is a fantastic version control system (VCS), it is not a project management +tool nor a cogent way of managing an architectural catalog or backlog; this +proposal is limited to motivating the creation of a standardized definition of +work in order to facilitate project management. This primitive for describing +a unit of work may also allow contributors to create their own personalized +view of the state of the project while relying on Git and GitHub for consistency +and durable storage. + +## Unresolved Questions + - How reviewers and approvers are assigned to a KEP - Approval decision process for a KEP - Example schedule, deadline, and time frame for each stage of a KEP diff --git a/docs/imported/community/mentoring.md b/docs/imported/community/mentoring.md index 5d610ce44e38d..a156146db4491 100755 --- a/docs/imported/community/mentoring.md +++ b/docs/imported/community/mentoring.md @@ -1,41 +1,49 @@ --- title: Kubernetes Mentoring Initiatives +notitle: true --- +# Kubernetes Mentoring Initiatives + This folder will be used for all mentoring initiatives for Kubernetes. --- ## Kubernetes Pilots -We understand that everyone has different learning styles and we want to support as many of those as possible. Mentoring is vital to the growth of an individual and organization of every kind. For Kubernetes, the larger the project becomes, it's necessary to keep a continuous pipeline of quality contributors. +We understand that everyone has different learning styles and we want to support as many of those as possible. Mentoring is vital to the growth of an individual and organization of every kind. For Kubernetes, the larger the project becomes, it's necessary to keep a continuous pipeline of quality contributors. *What's a Pilot?* A pilot is a Kubernetes mentor helping new and current members navigate the seas of our repos. ## Current mentoring activities: -All are currently in an incubation phase. Please reach out to Paris Pittman (parispittman@google.com or Paris on Kubernetes slack channel) for more information on how to get involved. The preliminary deck for mentoring strategies is [here](https://docs.google.com/presentation/d/1bRjDEPEn3autWzaEFirbLfHagbZV04Q9kTCalYmnnXw/edit?usp=sharing0). +All are currently in an incubation phase. Please reach out to Paris Pittman (parispittman@google.com or Paris on Kubernetes slack channel) for more information on how to get involved. 
+ +Mentors On Demand +* [Meet Our Contributors](https://github.com/kubernetes/community/tree/master/mentoring/meet-our-contributors.md) + +Long Term Contributor Ladder Growth +* [Group Mentoring Cohorts](https://github.com/kubernetes/community/tree/master/mentoring/group-mentoring.md) -[Contributor Office Hours](https://github.com/kubernetes/community/blob/master/events/office-hours.md) -[Group Mentoring Cohorts](https://github.com/kubernetes/community/tree/master/mentoring/group-mentoring.md) -[Outreachy](https://github.com/kubernetes/community/tree/master/sig-cli/outreachy.md) +Students +* [Outreachy](https://github.com/kubernetes/community/tree/master/sig-cli/outreachy.md) +* [Google Summer of Code](https://github.com/kubernetes/community/tree/master/mentoring/google-summer-of-code.md) #### Inspiration and Thanks This is not an out of the box program but was largely inspired by the following: * [Ada Developer Academy](https://adadevelopersacademy.org/) +* [Apache Mentoring Programme](https://community.apache.org/mentoringprogramme.html) +* [exercism.io](https://github.com/OperationCode/exercism-io-mentoring) * [Google Summer of Code](https://developers.google.com/open-source/gsoc/) -* [exercism.io](https://github.com/OperationCode/exercism-io-mentoring) -* [OpenStack Mentoring](https://wiki.openstack.org/wiki/Mentoring) -* [Apache Mentoring Programme](https://community.apache.org/mentoringprogramme.html) +* [Outreachy](https://www.outreachy.org/) +* [OpenStack Mentoring](https://wiki.openstack.org/wiki/Mentoring) Thanks to: * the many contributors who reviewed and participated in brainstorming, * founding mentees for their willingness to try this out, * founding Pilots (@chrislovecnm, @luxas, @kow3ns) - + We welcome PRs, suggestions, and help! -welcome PRs, suggestions, and help! - the many contributors who reviewed and participated in brainstorming, -* founding mentees for their willingness to try this out, + try this out, * founding Pilots (@chrislovecnm, @luxas, @kow3ns) We welcome PRs, suggestions, and help! From 1f72c7a1b88fb29e3221fa4d3fd144ae5ca0bf00 Mon Sep 17 00:00:00 2001 From: Alex Glikson Date: Mon, 26 Feb 2018 13:47:44 -0500 Subject: [PATCH 029/117] Fixed cross-node preemption example (#7150) * Fixed cross-node preemption example The constraint between P and Q should be pod affinity rather than anti-affinity in order to the example to be correct. Fixes #7149 * Fixed example to emphasize Zone namespace in pod anti-affinity --- .../concepts/configuration/pod-priority-preemption.md | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/docs/concepts/configuration/pod-priority-preemption.md b/docs/concepts/configuration/pod-priority-preemption.md index 1a85f4593b2c6..01c1c764eda14 100644 --- a/docs/concepts/configuration/pod-priority-preemption.md +++ b/docs/concepts/configuration/pod-priority-preemption.md @@ -220,14 +220,15 @@ can be scheduled on N. P might become feasible on N only if a Pod on another Node is preempted. Here's an example: * Pod P is being considered for Node N. -* Pod Q is running on another Node in the same zone as Node N. -* Pod P has anti-affinity with Pod Q. -* There are no other cases of anti-affinity between Pod P and other Pods in the zone. -* In order to schedule Pod P on Node N, Pod Q should be preempted, but scheduler +* Pod Q is running on another Node in the same Zone as Node N. 
+* Pod P has Zone-wide anti-affinity with Pod Q +(`topologyKey: failure-domain.beta.kubernetes.io/zone`). +* There are no other cases of anti-affinity between Pod P and other Pods in the Zone. +* In order to schedule Pod P on Node N, Pod Q can be preempted, but scheduler does not perform cross-node preemption. So, Pod P will be deemed unschedulable on Node N. -If Pod Q were removed from its Node, the anti-affinity violation would be gone, +If Pod Q were removed from its Node, the Pod anti-affinity violation would be gone, and Pod P could possibly be scheduled on Node N. We may consider adding cross Node preemption in future versions if we find an From 527911530d81afd3cf1915b355b7258c8781fb7a Mon Sep 17 00:00:00 2001 From: Weibin Lin Date: Tue, 27 Feb 2018 05:37:44 +0800 Subject: [PATCH 030/117] Remove the deprecated '--configure-cbr0=' (#7518) --- docs/getting-started-guides/scratch.md | 33 -------------------------- 1 file changed, 33 deletions(-) diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index c364469eac583..04aad31e76c67 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -405,7 +405,6 @@ Arguments to consider: - `--docker-root=` - `--root-dir=` - `--pod-cidr=` The CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master. - - `--configure-cbr0=` (described below) - `--register-node` (described in [Node](/docs/admin/node/) documentation.) ### kube-proxy @@ -441,38 +440,6 @@ this `NODE_X_BRIDGE_ADDR`. For example, if `NODE_X_POD_CIDR` is `10.0.0.0/16`, then `NODE_X_BRIDGE_ADDR` is `10.0.0.1/16`. NOTE: this retains the `/16` suffix because of how this is used later. -- Recommended, automatic approach: - - 1. Set `--configure-cbr0=true` option in kubelet init script and restart kubelet service. Kubelet will configure cbr0 automatically. - It will wait to do this until the node controller has set Node.Spec.PodCIDR. Since you have not setup apiserver and node controller - yet, the bridge will not be setup immediately. -- Alternate, manual approach: - - 1. Set `--configure-cbr0=false` on kubelet and restart. - 1. Create a bridge. - - ``` - ip link add name cbr0 type bridge - ``` - - 1. Set appropriate MTU. NOTE: the actual value of MTU will depend on your network environment - - ``` - ip link set dev cbr0 mtu 1460 - ``` - - 1. Add the node's network to the bridge (docker will go on other side of bridge). - - ``` - ip addr add $NODE_X_BRIDGE_ADDR dev cbr0 - ``` - - 1. Turn it on - - ``` - ip link set dev cbr0 up - ``` - If you have turned off Docker's IP masquerading to allow pods to talk to each other, then you may need to do masquerading just for destination IPs outside the cluster network. For example: From 2c5ace18e9f6f398efdaee59440faddcfaa2b847 Mon Sep 17 00:00:00 2001 From: Kris Nova Date: Mon, 26 Feb 2018 21:06:45 -0600 Subject: [PATCH 031/117] Fixing the space in volumes docs (#7532) --- docs/concepts/storage/volumes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md index 3c49b03672eef..10bd4c64c8470 100644 --- a/docs/concepts/storage/volumes.md +++ b/docs/concepts/storage/volumes.md @@ -49,7 +49,7 @@ volume type used. 
To use a volume, a pod specifies what volumes to provide for the pod (the `spec.volumes` -field) and where to mount those into containers(the +field) and where to mount those into containers (the `spec.containers.volumeMounts` field). From 46d4998e3ec236007c6b353f261144e4bcfc76ea Mon Sep 17 00:00:00 2001 From: Alejandra Bustos Date: Mon, 26 Feb 2018 21:08:46 -0600 Subject: [PATCH 032/117] Add JavaScript client for Kubernetes (#7529) --- docs/reference/client-libraries.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/reference/client-libraries.md b/docs/reference/client-libraries.md index 0db4644822823..01ef8bc7e4bb6 100644 --- a/docs/reference/client-libraries.md +++ b/docs/reference/client-libraries.md @@ -15,7 +15,7 @@ you do not need to implement the API calls and request/response types yourself. You can use a client library for the programming language you are using. Client libraries often handle common tasks such as authentication for you. -Most client libraries can discover and use the Kubernetes Service Account to +Most client libraries can discover and use the Kubernetes Service Account to authenticate if the API client is running inside the Kubernetes cluster, or can understand the [kubeconfig file](/docs/tasks/access-application-cluster/authenticate-across-clusters-kubeconfig/) format to read the credentials and the API Server address. @@ -32,6 +32,8 @@ Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery | Python | [github.com/kubernetes-client/python/](https://github.com/kubernetes-client/python/) | [browse](https://github.com/kubernetes-client/python/tree/master/examples) | Java | [github.com/kubernetes-client/java](https://github.com/kubernetes-client/java/) | [browse](https://github.com/kubernetes-client/java#installation) | dotnet | [github.com/kubernetes-client/csharp](https://github.com/kubernetes-client/csharp) | [browse](https://github.com/kubernetes-client/csharp/tree/master/examples/simple) +| JavaScript | [github.com/kubernetes-client/javascript](https://github.com/kubernetes-client/javascript) | [browse](https://github.com/kubernetes-client/javascript/tree/master/examples) + ## Community-maintained client libraries From 5401750219cb3f08fa2e86e2b2e715fff29cbb83 Mon Sep 17 00:00:00 2001 From: Xiaodong Zhang Date: Wed, 28 Feb 2018 01:51:46 +0800 Subject: [PATCH 033/117] Bump up deployment version in concepts/overview/object-management-kubectl folder (#7327) --- .../declarative-config.md | 61 +++++++++++++------ .../simple_deployment.yaml | 5 +- .../update_deployment.yaml | 5 +- 3 files changed, 52 insertions(+), 19 deletions(-) diff --git a/docs/concepts/overview/object-management-kubectl/declarative-config.md b/docs/concepts/overview/object-management-kubectl/declarative-config.md index facfb4909419a..1772c4874d2d4 100644 --- a/docs/concepts/overview/object-management-kubectl/declarative-config.md +++ b/docs/concepts/overview/object-management-kubectl/declarative-config.md @@ -86,15 +86,19 @@ metadata: # This is the json representation of simple_deployment.yaml # It was written by kubectl apply when the object was created kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"apps/v1beta1","kind":"Deployment", + {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, - 
"spec":{"minReadySeconds":5,"template":{"metadata":{"labels":{"app":"nginx"}}, + "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, "spec":{"containers":[{"image":"nginx:1.7.9","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: # ... minReadySeconds: 5 + selector: + matchLabels: + # ... + app: nginx template: metadata: # ... @@ -157,15 +161,19 @@ metadata: # This is the json representation of simple_deployment.yaml # It was written by kubectl apply when the object was created kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"apps/v1beta1","kind":"Deployment", + {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, - "spec":{"minReadySeconds":5,"template":{"metadata":{"labels":{"app":"nginx"}}, + "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, "spec":{"containers":[{"image":"nginx:1.7.9","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: # ... minReadySeconds: 5 + selector: + matchLabels: + # ... + app: nginx template: metadata: # ... @@ -201,7 +209,7 @@ The output shows that the `replicas` field has been set to 2, and the `last-appl annotation does not contain a `replicas` field: ``` -apiVersion: apps/v1beta1 +apiVersion: apps/v1 kind: Deployment metadata: annotations: @@ -209,9 +217,9 @@ metadata: # note that the annotation does not contain replicas # because it was not updated through apply kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"apps/v1beta1","kind":"Deployment", + {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, - "spec":{"minReadySeconds":5,"template":{"metadata":{"labels":{"app":"nginx"}}, + "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, "spec":{"containers":[{"image":"nginx:1.7.9","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... @@ -219,6 +227,10 @@ spec: replicas: 2 # written by scale # ... minReadySeconds: 5 + selector: + matchLabels: + # ... + app: nginx template: metadata: # ... @@ -261,7 +273,7 @@ The output shows the following changes to the live configuration: - The `last-applied-configuration` annotation no longer contains the `minReadySeconds` field. ```shell -apiVersion: apps/v1beta1 +apiVersion: apps/v1 kind: Deployment metadata: annotations: @@ -269,9 +281,9 @@ metadata: # The annotation contains the updated image to nginx 1.11.9, # but does not contain the updated replicas to 2 kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"apps/v1beta1","kind":"Deployment", + {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, - "spec":{"template":{"metadata":{"labels":{"app":"nginx"}}, + "spec":{"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, "spec":{"containers":[{"image":"nginx:1.11.9","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... @@ -279,6 +291,10 @@ spec: replicas: 2 # Set by `kubectl scale`. Ignored by `kubectl apply`. # minReadySeconds cleared by `kubectl apply` # ... + selector: + matchLabels: + # ... + app: nginx template: metadata: # ... @@ -388,7 +404,7 @@ Here's an example. 
Suppose this is the configuration file for a Deployment objec Also, suppose this is the live configuration for the same Deployment object: ```shell -apiVersion: apps/v1beta1 +apiVersion: apps/v1 kind: Deployment metadata: annotations: @@ -396,9 +412,9 @@ metadata: # note that the annotation does not contain replicas # because it was not updated through apply kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"apps/v1beta1","kind":"Deployment", + {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, - "spec":{"minReadySeconds":5,"template":{"metadata":{"labels":{"app":"nginx"}}, + "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, "spec":{"containers":[{"image":"nginx:1.7.9","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... @@ -406,6 +422,10 @@ spec: replicas: 2 # written by scale # ... minReadySeconds: 5 + selector: + matchLabels: + # ... + app: nginx template: metadata: # ... @@ -439,7 +459,7 @@ Here are the merge calculations that would be performed by `kubectl apply`: Here is the live configuration that is the result of the merge: ```shell -apiVersion: apps/v1beta1 +apiVersion: apps/v1 kind: Deployment metadata: annotations: @@ -447,13 +467,17 @@ metadata: # The annotation contains the updated image to nginx 1.11.9, # but does not contain the updated replicas to 2 kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"apps/v1beta1","kind":"Deployment", + {"apiVersion":"apps/v1","kind":"Deployment", "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"}, - "spec":{"template":{"metadata":{"labels":{"app":"nginx"}}, + "spec":{"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}}, "spec":{"containers":[{"image":"nginx:1.11.9","name":"nginx", "ports":[{"containerPort":80}]}]}}}} # ... spec: + selector: + matchLabels: + # ... + app: nginx replicas: 2 # Set by `kubectl scale`. Ignored by `kubectl apply`. # minReadySeconds cleared by `kubectl apply` # ... @@ -686,10 +710,13 @@ The output shows that the API server set several fields to default values in the configuration. These fields were not specified in the configuration file. ```shell -apiVersion: apps/v1beta1 +apiVersion: apps/v1 kind: Deployment # ... 
spec: + selector: + matchLabels: + app: nginx minReadySeconds: 5 replicas: 1 # defaulted by apiserver selector: diff --git a/docs/concepts/overview/object-management-kubectl/simple_deployment.yaml b/docs/concepts/overview/object-management-kubectl/simple_deployment.yaml index 9ebff9e78eb9e..10fa1ddf29999 100644 --- a/docs/concepts/overview/object-management-kubectl/simple_deployment.yaml +++ b/docs/concepts/overview/object-management-kubectl/simple_deployment.yaml @@ -1,8 +1,11 @@ -apiVersion: apps/v1beta1 +apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment spec: + selector: + matchLabels: + app: nginx minReadySeconds: 5 template: metadata: diff --git a/docs/concepts/overview/object-management-kubectl/update_deployment.yaml b/docs/concepts/overview/object-management-kubectl/update_deployment.yaml index 1e2c858ebdd43..d53aa3e6d2fc8 100644 --- a/docs/concepts/overview/object-management-kubectl/update_deployment.yaml +++ b/docs/concepts/overview/object-management-kubectl/update_deployment.yaml @@ -1,8 +1,11 @@ -apiVersion: apps/v1beta1 +apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment spec: + selector: + matchLabels: + app: nginx template: metadata: labels: From 3121977ed95f72eadbc870aeef227887d272b80f Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Tue, 27 Feb 2018 10:51:46 -0800 Subject: [PATCH 034/117] In concepts, in front matter, change approvers to reviewers. (#7442) --- docs/concepts/api-extension/apiserver-aggregation.md | 2 +- docs/concepts/api-extension/custom-resources.md | 2 +- docs/concepts/architecture/master-node-communication.md | 2 +- docs/concepts/architecture/nodes.md | 2 +- .../cluster-administration/cluster-administration-overview.md | 2 +- docs/concepts/cluster-administration/device-plugins.md | 2 +- .../cluster-administration/kubelet-garbage-collection.md | 2 +- docs/concepts/cluster-administration/logging.md | 2 +- docs/concepts/cluster-administration/manage-deployment.md | 2 +- docs/concepts/cluster-administration/network-plugins.md | 2 +- docs/concepts/cluster-administration/networking.md | 2 +- docs/concepts/cluster-administration/sysctl-cluster.md | 2 +- docs/concepts/configuration/assign-pod-node.md | 2 +- docs/concepts/configuration/overview.md | 2 +- docs/concepts/configuration/pod-priority-preemption.md | 2 +- docs/concepts/configuration/secret.md | 2 +- docs/concepts/configuration/taint-and-toleration.md | 2 +- docs/concepts/containers/container-environment-variables.md | 2 +- docs/concepts/containers/container-lifecycle-hooks.md | 2 +- docs/concepts/containers/images.md | 2 +- docs/concepts/example-concept-template.md | 2 +- docs/concepts/overview/components.md | 2 +- docs/concepts/overview/extending.md | 2 +- docs/concepts/overview/kubernetes-api.md | 2 +- docs/concepts/overview/what-is-kubernetes.md | 2 +- docs/concepts/overview/working-with-objects/labels.md | 2 +- docs/concepts/overview/working-with-objects/names.md | 2 +- docs/concepts/overview/working-with-objects/namespaces.md | 2 +- docs/concepts/policy/pod-security-policy.md | 2 +- docs/concepts/policy/resource-quotas.md | 2 +- docs/concepts/service-catalog/index.md | 2 +- .../add-entries-to-pod-etc-hosts-with-host-aliases.md | 2 +- .../services-networking/connect-applications-service.md | 2 +- docs/concepts/services-networking/dns-pod-service.md | 2 +- docs/concepts/services-networking/ingress.md | 2 +- docs/concepts/services-networking/network-policies.md | 2 +- docs/concepts/services-networking/service.md | 2 +- docs/concepts/storage/dynamic-provisioning.md 
| 2 +- docs/concepts/storage/persistent-volumes.md | 2 +- docs/concepts/storage/storage-classes.md | 2 +- docs/concepts/storage/volumes.md | 2 +- docs/concepts/workloads/controllers/cron-jobs.md | 2 +- docs/concepts/workloads/controllers/daemonset.md | 2 +- docs/concepts/workloads/controllers/deployment.md | 2 +- docs/concepts/workloads/controllers/jobs-run-to-completion.md | 2 +- docs/concepts/workloads/controllers/replicaset.md | 2 +- docs/concepts/workloads/controllers/replicationcontroller.md | 2 +- docs/concepts/workloads/controllers/statefulset.md | 2 +- docs/concepts/workloads/pods/disruptions.md | 2 +- docs/concepts/workloads/pods/init-containers.md | 2 +- docs/concepts/workloads/pods/pod-overview.md | 2 +- docs/concepts/workloads/pods/pod.md | 2 +- docs/concepts/workloads/pods/podpreset.md | 2 +- 53 files changed, 53 insertions(+), 53 deletions(-) diff --git a/docs/concepts/api-extension/apiserver-aggregation.md b/docs/concepts/api-extension/apiserver-aggregation.md index b246da369b5a1..c36744b345b5a 100644 --- a/docs/concepts/api-extension/apiserver-aggregation.md +++ b/docs/concepts/api-extension/apiserver-aggregation.md @@ -1,6 +1,6 @@ --- title: Extending the Kubernetes API with the aggregation layer -approvers: +reviewers: - lavalamp - cheftako - chenopis diff --git a/docs/concepts/api-extension/custom-resources.md b/docs/concepts/api-extension/custom-resources.md index fe7f06f6034d8..cf1bd67d1fdba 100644 --- a/docs/concepts/api-extension/custom-resources.md +++ b/docs/concepts/api-extension/custom-resources.md @@ -1,6 +1,6 @@ --- title: Custom Resources -approvers: +reviewers: - enisoc - deads2k --- diff --git a/docs/concepts/architecture/master-node-communication.md b/docs/concepts/architecture/master-node-communication.md index ee957de22eace..4a1906ad7d1cc 100644 --- a/docs/concepts/architecture/master-node-communication.md +++ b/docs/concepts/architecture/master-node-communication.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - dchen1107 - roberthbailey - liggitt diff --git a/docs/concepts/architecture/nodes.md b/docs/concepts/architecture/nodes.md index 30fb0fa665532..617c69976167a 100644 --- a/docs/concepts/architecture/nodes.md +++ b/docs/concepts/architecture/nodes.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - caesarxuchao - dchen1107 title: Nodes diff --git a/docs/concepts/cluster-administration/cluster-administration-overview.md b/docs/concepts/cluster-administration/cluster-administration-overview.md index 6af7fce2bf39c..0d176a210f662 100644 --- a/docs/concepts/cluster-administration/cluster-administration-overview.md +++ b/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - davidopp - lavalamp title: Cluster Administration Overview diff --git a/docs/concepts/cluster-administration/device-plugins.md b/docs/concepts/cluster-administration/device-plugins.md index bc3613c07f6d4..432194ec2f15a 100644 --- a/docs/concepts/cluster-administration/device-plugins.md +++ b/docs/concepts/cluster-administration/device-plugins.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: title: Device Plugins description: Use the Kubernetes device plugin framework to implement plugins for GPUs, NICs, FPGAs, InfiniBand, and similar resources that require vendor-specific setup. 
--- diff --git a/docs/concepts/cluster-administration/kubelet-garbage-collection.md b/docs/concepts/cluster-administration/kubelet-garbage-collection.md index 068ee6bd2ab0c..f65f6e06523ab 100644 --- a/docs/concepts/cluster-administration/kubelet-garbage-collection.md +++ b/docs/concepts/cluster-administration/kubelet-garbage-collection.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - mikedanese title: Configuring kubelet Garbage Collection --- diff --git a/docs/concepts/cluster-administration/logging.md b/docs/concepts/cluster-administration/logging.md index 0efe3031a10d9..77504733ae0ce 100644 --- a/docs/concepts/cluster-administration/logging.md +++ b/docs/concepts/cluster-administration/logging.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - crassirostris - piosz title: Logging Architecture diff --git a/docs/concepts/cluster-administration/manage-deployment.md b/docs/concepts/cluster-administration/manage-deployment.md index e18ae4a2dd6af..4c280f5a4ce5a 100644 --- a/docs/concepts/cluster-administration/manage-deployment.md +++ b/docs/concepts/cluster-administration/manage-deployment.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - bgrant0607 - janetkuo - mikedanese diff --git a/docs/concepts/cluster-administration/network-plugins.md b/docs/concepts/cluster-administration/network-plugins.md index 9d3ec0b89d9e2..67c3605ae99f4 100644 --- a/docs/concepts/cluster-administration/network-plugins.md +++ b/docs/concepts/cluster-administration/network-plugins.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - dcbw - freehan - thockin diff --git a/docs/concepts/cluster-administration/networking.md b/docs/concepts/cluster-administration/networking.md index 8421c9ee71154..7f39a3cbfe229 100644 --- a/docs/concepts/cluster-administration/networking.md +++ b/docs/concepts/cluster-administration/networking.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - thockin title: Cluster Networking --- diff --git a/docs/concepts/cluster-administration/sysctl-cluster.md b/docs/concepts/cluster-administration/sysctl-cluster.md index 100501b37b27b..d79a6abd070c3 100644 --- a/docs/concepts/cluster-administration/sysctl-cluster.md +++ b/docs/concepts/cluster-administration/sysctl-cluster.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - sttts title: Using Sysctls in a Kubernetes Cluster --- diff --git a/docs/concepts/configuration/assign-pod-node.md b/docs/concepts/configuration/assign-pod-node.md index ab4fce8f96c85..b1291efc4508c 100644 --- a/docs/concepts/configuration/assign-pod-node.md +++ b/docs/concepts/configuration/assign-pod-node.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - davidopp - kevin-wangzefeng - bsalamat diff --git a/docs/concepts/configuration/overview.md b/docs/concepts/configuration/overview.md index cde8839ea8669..3322e0b356ceb 100644 --- a/docs/concepts/configuration/overview.md +++ b/docs/concepts/configuration/overview.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - mikedanese title: Configuration Best Practices --- diff --git a/docs/concepts/configuration/pod-priority-preemption.md b/docs/concepts/configuration/pod-priority-preemption.md index 01c1c764eda14..8598d7c2a221b 100644 --- a/docs/concepts/configuration/pod-priority-preemption.md +++ b/docs/concepts/configuration/pod-priority-preemption.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - davidopp - wojtek-t title: Pod Priority and Preemption diff --git a/docs/concepts/configuration/secret.md b/docs/concepts/configuration/secret.md index bb9b6b74701e7..044fa139403f9 100644 --- a/docs/concepts/configuration/secret.md +++ 
b/docs/concepts/configuration/secret.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - mikedanese title: Secrets --- diff --git a/docs/concepts/configuration/taint-and-toleration.md b/docs/concepts/configuration/taint-and-toleration.md index 0deea64d83231..d6da462792207 100644 --- a/docs/concepts/configuration/taint-and-toleration.md +++ b/docs/concepts/configuration/taint-and-toleration.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - davidopp - kevin-wangzefeng - bsalamat diff --git a/docs/concepts/containers/container-environment-variables.md b/docs/concepts/containers/container-environment-variables.md index 236fc01d4336a..9ed5eb7775cae 100644 --- a/docs/concepts/containers/container-environment-variables.md +++ b/docs/concepts/containers/container-environment-variables.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - mikedanese - thockin title: Container Environment Variables diff --git a/docs/concepts/containers/container-lifecycle-hooks.md b/docs/concepts/containers/container-lifecycle-hooks.md index d1d37d593a69f..b7f90f8f94be0 100644 --- a/docs/concepts/containers/container-lifecycle-hooks.md +++ b/docs/concepts/containers/container-lifecycle-hooks.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - mikedanese - thockin title: Container Lifecycle Hooks diff --git a/docs/concepts/containers/images.md b/docs/concepts/containers/images.md index 20385dd2cf980..38129f2d73b64 100644 --- a/docs/concepts/containers/images.md +++ b/docs/concepts/containers/images.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - erictune - thockin title: Images diff --git a/docs/concepts/example-concept-template.md b/docs/concepts/example-concept-template.md index e939db88a330f..9662f7c73c307 100644 --- a/docs/concepts/example-concept-template.md +++ b/docs/concepts/example-concept-template.md @@ -1,6 +1,6 @@ --- title: Example Concept Template -approvers: +reviewers: - chenopis --- diff --git a/docs/concepts/overview/components.md b/docs/concepts/overview/components.md index 94889ab43f614..f18547aaa0734 100644 --- a/docs/concepts/overview/components.md +++ b/docs/concepts/overview/components.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - lavalamp title: Kubernetes Components --- diff --git a/docs/concepts/overview/extending.md b/docs/concepts/overview/extending.md index 4d27dee318408..e17b40f855ae4 100644 --- a/docs/concepts/overview/extending.md +++ b/docs/concepts/overview/extending.md @@ -1,6 +1,6 @@ --- title: Extending your Kubernetes Cluster -approvers: +reviewers: - erictune - lavalamp - cheftako diff --git a/docs/concepts/overview/kubernetes-api.md b/docs/concepts/overview/kubernetes-api.md index c1f84c2fc22fb..e01e1905f24ee 100644 --- a/docs/concepts/overview/kubernetes-api.md +++ b/docs/concepts/overview/kubernetes-api.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - chenopis title: The Kubernetes API --- diff --git a/docs/concepts/overview/what-is-kubernetes.md b/docs/concepts/overview/what-is-kubernetes.md index f6e6aeef21638..3e84cb279a754 100644 --- a/docs/concepts/overview/what-is-kubernetes.md +++ b/docs/concepts/overview/what-is-kubernetes.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - bgrant0607 - mikedanese title: What is Kubernetes? 
diff --git a/docs/concepts/overview/working-with-objects/labels.md b/docs/concepts/overview/working-with-objects/labels.md index e5853b920dfe3..c391026c7c7f4 100644 --- a/docs/concepts/overview/working-with-objects/labels.md +++ b/docs/concepts/overview/working-with-objects/labels.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - mikedanese title: Labels and Selectors --- diff --git a/docs/concepts/overview/working-with-objects/names.md b/docs/concepts/overview/working-with-objects/names.md index bc5be6422240e..f2aa7fb1125b2 100644 --- a/docs/concepts/overview/working-with-objects/names.md +++ b/docs/concepts/overview/working-with-objects/names.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - mikedanese - thockin title: Names diff --git a/docs/concepts/overview/working-with-objects/namespaces.md b/docs/concepts/overview/working-with-objects/namespaces.md index 3d85256dfd8bf..b7e5329ba5da0 100644 --- a/docs/concepts/overview/working-with-objects/namespaces.md +++ b/docs/concepts/overview/working-with-objects/namespaces.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - derekwaynecarr - mikedanese - thockin diff --git a/docs/concepts/policy/pod-security-policy.md b/docs/concepts/policy/pod-security-policy.md index 9a08e5242f6d7..24d7c7dd6e379 100644 --- a/docs/concepts/policy/pod-security-policy.md +++ b/docs/concepts/policy/pod-security-policy.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - pweil- - tallclair title: Pod Security Policies diff --git a/docs/concepts/policy/resource-quotas.md b/docs/concepts/policy/resource-quotas.md index fa12317377e3c..9253d3d602ba1 100644 --- a/docs/concepts/policy/resource-quotas.md +++ b/docs/concepts/policy/resource-quotas.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - derekwaynecarr title: Resource Quotas --- diff --git a/docs/concepts/service-catalog/index.md b/docs/concepts/service-catalog/index.md index 8b4f6a11e55bf..ed39b3b53471b 100644 --- a/docs/concepts/service-catalog/index.md +++ b/docs/concepts/service-catalog/index.md @@ -1,6 +1,6 @@ --- title: Service Catalog -approvers: +reviewers: - chenopis --- diff --git a/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md b/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md index e1a1e1c8ed3d7..4c1f9de481dbc 100644 --- a/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md +++ b/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - rickypai - thockin title: Adding entries to Pod /etc/hosts with HostAliases diff --git a/docs/concepts/services-networking/connect-applications-service.md b/docs/concepts/services-networking/connect-applications-service.md index 7638cb99f9474..5587498f2b574 100644 --- a/docs/concepts/services-networking/connect-applications-service.md +++ b/docs/concepts/services-networking/connect-applications-service.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - caesarxuchao - lavalamp - thockin diff --git a/docs/concepts/services-networking/dns-pod-service.md b/docs/concepts/services-networking/dns-pod-service.md index b5e626c1b7035..321b43ca0c0ca 100644 --- a/docs/concepts/services-networking/dns-pod-service.md +++ b/docs/concepts/services-networking/dns-pod-service.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - davidopp - thockin title: DNS for Services and Pods diff --git a/docs/concepts/services-networking/ingress.md b/docs/concepts/services-networking/ingress.md index 38c189e4f2263..8715d3ecfeacc 100644 --- 
a/docs/concepts/services-networking/ingress.md +++ b/docs/concepts/services-networking/ingress.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - bprashanth title: Ingress --- diff --git a/docs/concepts/services-networking/network-policies.md b/docs/concepts/services-networking/network-policies.md index a3840c24f23a9..5874351e0948a 100644 --- a/docs/concepts/services-networking/network-policies.md +++ b/docs/concepts/services-networking/network-policies.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - thockin - caseydavenport - danwinship diff --git a/docs/concepts/services-networking/service.md b/docs/concepts/services-networking/service.md index 192272cba6bdd..236d43e8d9fba 100644 --- a/docs/concepts/services-networking/service.md +++ b/docs/concepts/services-networking/service.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - bprashanth title: Services --- diff --git a/docs/concepts/storage/dynamic-provisioning.md b/docs/concepts/storage/dynamic-provisioning.md index 3370dc9da8fad..517574f793966 100644 --- a/docs/concepts/storage/dynamic-provisioning.md +++ b/docs/concepts/storage/dynamic-provisioning.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - saad-ali title: Dynamic Volume Provisioning --- diff --git a/docs/concepts/storage/persistent-volumes.md b/docs/concepts/storage/persistent-volumes.md index 97fd66e6e07c5..39b0ce59aede0 100644 --- a/docs/concepts/storage/persistent-volumes.md +++ b/docs/concepts/storage/persistent-volumes.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - jsafrane - mikedanese - saad-ali diff --git a/docs/concepts/storage/storage-classes.md b/docs/concepts/storage/storage-classes.md index 2dc10e5e1d531..6353fdb0beddb 100644 --- a/docs/concepts/storage/storage-classes.md +++ b/docs/concepts/storage/storage-classes.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - jsafrane - mikedanese - saad-ali diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md index 10bd4c64c8470..f697c04f9e411 100644 --- a/docs/concepts/storage/volumes.md +++ b/docs/concepts/storage/volumes.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - jsafrane - mikedanese - saad-ali diff --git a/docs/concepts/workloads/controllers/cron-jobs.md b/docs/concepts/workloads/controllers/cron-jobs.md index ae820d4648fa6..26620c55c7ba2 100644 --- a/docs/concepts/workloads/controllers/cron-jobs.md +++ b/docs/concepts/workloads/controllers/cron-jobs.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - erictune - soltysh - janetkuo diff --git a/docs/concepts/workloads/controllers/daemonset.md b/docs/concepts/workloads/controllers/daemonset.md index f6737ca743313..e2c00863b89a9 100644 --- a/docs/concepts/workloads/controllers/daemonset.md +++ b/docs/concepts/workloads/controllers/daemonset.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - enisoc - erictune - foxish diff --git a/docs/concepts/workloads/controllers/deployment.md b/docs/concepts/workloads/controllers/deployment.md index 1482f673f984e..1e5003f0d95b2 100644 --- a/docs/concepts/workloads/controllers/deployment.md +++ b/docs/concepts/workloads/controllers/deployment.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - bgrant0607 - janetkuo title: Deployments diff --git a/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/docs/concepts/workloads/controllers/jobs-run-to-completion.md index 08324bb86e838..fb68ece05c2b9 100644 --- a/docs/concepts/workloads/controllers/jobs-run-to-completion.md +++ b/docs/concepts/workloads/controllers/jobs-run-to-completion.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - erictune - soltysh 
title: Jobs - Run to Completion diff --git a/docs/concepts/workloads/controllers/replicaset.md b/docs/concepts/workloads/controllers/replicaset.md index 750346e7b07b4..72c59b7abcbcf 100644 --- a/docs/concepts/workloads/controllers/replicaset.md +++ b/docs/concepts/workloads/controllers/replicaset.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - Kashomon - bprashanth - madhusudancs diff --git a/docs/concepts/workloads/controllers/replicationcontroller.md b/docs/concepts/workloads/controllers/replicationcontroller.md index 792d2d76bca91..6074550977800 100644 --- a/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/docs/concepts/workloads/controllers/replicationcontroller.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - bprashanth - janetkuo title: ReplicationController diff --git a/docs/concepts/workloads/controllers/statefulset.md b/docs/concepts/workloads/controllers/statefulset.md index 4bf49e7a70cbe..fe97e79f8e33e 100644 --- a/docs/concepts/workloads/controllers/statefulset.md +++ b/docs/concepts/workloads/controllers/statefulset.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - enisoc - erictune - foxish diff --git a/docs/concepts/workloads/pods/disruptions.md b/docs/concepts/workloads/pods/disruptions.md index ae8f6df580843..151788f29d564 100644 --- a/docs/concepts/workloads/pods/disruptions.md +++ b/docs/concepts/workloads/pods/disruptions.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - erictune - foxish - davidopp diff --git a/docs/concepts/workloads/pods/init-containers.md b/docs/concepts/workloads/pods/init-containers.md index 355620ad127c5..1d0f123ffb4f9 100644 --- a/docs/concepts/workloads/pods/init-containers.md +++ b/docs/concepts/workloads/pods/init-containers.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - erictune title: Init Containers --- diff --git a/docs/concepts/workloads/pods/pod-overview.md b/docs/concepts/workloads/pods/pod-overview.md index fefdfc66b5dbd..3302e1fad8db8 100644 --- a/docs/concepts/workloads/pods/pod-overview.md +++ b/docs/concepts/workloads/pods/pod-overview.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - erictune title: Pod Overview --- diff --git a/docs/concepts/workloads/pods/pod.md b/docs/concepts/workloads/pods/pod.md index 842fc5b8a1f2d..bb4e05ca65dd2 100644 --- a/docs/concepts/workloads/pods/pod.md +++ b/docs/concepts/workloads/pods/pod.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: title: Pods --- diff --git a/docs/concepts/workloads/pods/podpreset.md b/docs/concepts/workloads/pods/podpreset.md index 316fe8b6dab8a..0152e8a566810 100644 --- a/docs/concepts/workloads/pods/podpreset.md +++ b/docs/concepts/workloads/pods/podpreset.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - jessfraz title: Pod Preset --- From 06f94d3983fde2ec7521a1f73c39b826efeb8f31 Mon Sep 17 00:00:00 2001 From: Aravind Date: Wed, 28 Feb 2018 00:22:46 +0530 Subject: [PATCH 035/117] Debugging DNS has been moved to a separate article. 
(#7436) * Separated Customizing DNS service Debugging DNS from https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/ * fixed a typo --- Gemfile.lock | 4 +- _data/tasks.yml | 1 + .../dns-custom-nameservers.md | 178 +--------------- .../dns-debugging-resolution.md | 197 ++++++++++++++++++ 4 files changed, 202 insertions(+), 178 deletions(-) create mode 100644 docs/tasks/administer-cluster/dns-debugging-resolution.md diff --git a/Gemfile.lock b/Gemfile.lock index 577f4e69b245b..4af0fc40657e8 100644 --- a/Gemfile.lock +++ b/Gemfile.lock @@ -47,7 +47,7 @@ GEM jekyll-paginate (1.1.0) jekyll-readme-index (0.0.3) jekyll (~> 3.0) - jekyll-redirect-from (0.12.1) + jekyll-redirect-from (0.13.0) jekyll (~> 3.3) jekyll-relative-links (0.5.1) jekyll (~> 3.3) @@ -145,7 +145,7 @@ DEPENDENCIES jekyll-optional-front-matter (~> 0.1) jekyll-paginate (= 1.1.0) jekyll-readme-index (= 0.0.3) - jekyll-redirect-from (~> 0.11) + jekyll-redirect-from (~> 0.13) jekyll-relative-links (~> 0.2) jekyll-seo-tag jekyll-sitemap diff --git a/_data/tasks.yml b/_data/tasks.yml index f23ba4b59578b..9fc290e8e1a88 100644 --- a/_data/tasks.yml +++ b/_data/tasks.yml @@ -180,6 +180,7 @@ toc: - docs/tasks/administer-cluster/configure-multiple-schedulers.md - docs/tasks/administer-cluster/ip-masq-agent.md - docs/tasks/administer-cluster/dns-custom-nameservers.md + - docs/tasks/administer-cluster/dns-debugging-resolution.md - docs/tasks/administer-cluster/pvc-protection.md - title: Federation - Run an App on Multiple Clusters diff --git a/docs/tasks/administer-cluster/dns-custom-nameservers.md b/docs/tasks/administer-cluster/dns-custom-nameservers.md index 85903915bf152..b60f05794e639 100644 --- a/docs/tasks/administer-cluster/dns-custom-nameservers.md +++ b/docs/tasks/administer-cluster/dns-custom-nameservers.md @@ -7,7 +7,7 @@ title: Customizing DNS Service {% capture overview %} This page provides hints on configuring DNS Pod and guidance on customizing the -DNS resolution process and diagnosing DNS problems. +DNS resolution process. {% endcapture %} {% capture prerequisites %} @@ -183,182 +183,8 @@ data: ["172.16.0.1"] ``` -## Debugging DNS resolution - -### Create a simple Pod to use as a test environment - -Create a file named busybox.yaml with the following contents: - -{% include code.html language="yaml" file="busybox.yaml" ghlink="/docs/tasks/administer-cluster/busybox.yaml" %} - -Then create a pod using this file and verify its status: - -```shell -$ kubectl create -f busybox.yaml -pod "busybox" created - -$ kubectl get pods busybox -NAME READY STATUS RESTARTS AGE -busybox 1/1 Running 0 -``` - -Once that pod is running, you can exec `nslookup` in that environment. -If you see something like the following, DNS is working correctly. - -```shell -$ kubectl exec -ti busybox -- nslookup kubernetes.default -Server: 10.0.0.10 -Address 1: 10.0.0.10 - -Name: kubernetes.default -Address 1: 10.0.0.1 -``` - -If the `nslookup` command fails, check the following: - -### Check the local DNS configuration first - -Take a look inside the resolv.conf file. 
-(See [Inheriting DNS from the node](#inheriting-dns-from-the-node) and -[Known issues](#known-issues) below for more information) - -```shell -$ kubectl exec busybox cat /etc/resolv.conf -``` - -Verify that the search path and name server are set up like the following -(note that search path may vary for different cloud providers): - -``` -search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal -nameserver 10.0.0.10 -options ndots:5 -``` - -Errors such as the following indicate a problem with the kube-dns add-on or -associated Services: - -``` -$ kubectl exec -ti busybox -- nslookup kubernetes.default -Server: 10.0.0.10 -Address 1: 10.0.0.10 - -nslookup: can't resolve 'kubernetes.default' -``` - -or - -``` -$ kubectl exec -ti busybox -- nslookup kubernetes.default -Server: 10.0.0.10 -Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local - -nslookup: can't resolve 'kubernetes.default' -``` - -### Check if the DNS pod is running - -Use the `kubectl get pods` command to verify that the DNS pod is running. - -```shell -$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -NAME READY STATUS RESTARTS AGE -... -kube-dns-v19-ezo1y 3/3 Running 0 1h -... -``` - -If you see that no pod is running or that the pod has failed/completed, the DNS -add-on may not be deployed by default in your current environment and you will -have to deploy it manually. - -### Check for Errors in the DNS pod - -Use `kubectl logs` command to see logs for the DNS daemons. - -```shell -$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns -$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq -$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar -``` - -See if there is any suspicious log. Letter '`W`', '`E`', '`F`' at the beginning -represent Warning, Error and Failure. Please search for entries that have these -as the logging level and use -[kubernetes issues](https://github.com/kubernetes/kubernetes/issues) -to report unexpected errors. - -### Is DNS service up? - -Verify that the DNS service is up by using the `kubectl get service` command. - -```shell -$ kubectl get svc --namespace=kube-system -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -... -kube-dns 10.0.0.10 53/UDP,53/TCP 1h -... -``` - -If you have created the service or in the case it should be created by default -but it does not appear, see -[debugging services](/docs/tasks/debug-application-cluster/debug-service/) for -more information. - -### Are DNS endpoints exposed? - -You can verify that DNS endpoints are exposed by using the `kubectl get endpoints` -command. - -```shell -$ kubectl get ep kube-dns --namespace=kube-system -NAME ENDPOINTS AGE -kube-dns 10.180.3.17:53,10.180.3.17:53 1h -``` - -If you do not see the endpoints, see endpoints section in the -[debugging services](/docs/tasks/debug-application-cluster/debug-service/) documentation. - -For additional Kubernetes DNS examples, see the -[cluster-dns examples](https://github.com/kubernetes/examples/tree/master/staging/cluster-dns) -in the Kubernetes GitHub repository. - -## Known issues - -Kubernetes installs do not configure the nodes' resolv.conf files to use the -cluster DNS by default, because that process is inherently distro-specific. -This should probably be implemented eventually. 
- -Linux's libc is impossibly stuck ([see this bug from -2005](https://bugzilla.redhat.com/show_bug.cgi?id=168253)) with limits of just -3 DNS `nameserver` records and 6 DNS `search` records. Kubernetes needs to -consume 1 `nameserver` record and 3 `search` records. This means that if a -local installation already uses 3 `nameserver`s or uses more than 3 `search`es, -some of those settings will be lost. As a partial workaround, the node can run -`dnsmasq` which will provide more `nameserver` entries, but not more `search` -entries. You can also use kubelet's `--resolv-conf` flag. - -If you are using Alpine version 3.3 or earlier as your base image, DNS may not -work properly owing to a known issue with Alpine. -Check [here](https://github.com/kubernetes/kubernetes/issues/30215) -for more information. - -## Kubernetes Federation (Multiple Zone support) - -Release 1.3 introduced Cluster Federation support for multi-site Kubernetes -installations. This required some minor (backward-compatible) changes to the -way the Kubernetes cluster DNS server processes DNS queries, to facilitate -the lookup of federated services (which span multiple Kubernetes clusters). -See the [Cluster Federation Administrators' Guide](/docs/concepts/cluster-administration/federation/) -for more details on Cluster Federation and multi-site support. - -## References - -- [DNS for Services and Pods](/docs/concepts/services-networking/dns-pod-service/) -- [Docs for the DNS cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md) - ## What's next -- [Autoscaling the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/). +- [Debugging DNS Resolution](/docs/tasks/administer-cluster/dns-debugging-resolution/). {% endcapture %} diff --git a/docs/tasks/administer-cluster/dns-debugging-resolution.md b/docs/tasks/administer-cluster/dns-debugging-resolution.md new file mode 100644 index 0000000000000..e77db838ec7d5 --- /dev/null +++ b/docs/tasks/administer-cluster/dns-debugging-resolution.md @@ -0,0 +1,197 @@ +--- +approvers: +- bowei +- zihongz +title: Debugging DNS Resolution +--- + +{% capture overview %} +This page provides hints on diagnosing DNS problems. +{% endcapture %} + +{% capture prerequisites %} +* {% include task-tutorial-prereqs.md %} +* Kubernetes version 1.6 and above. +* The cluster must be configured to use the `kube-dns` addon. +{% endcapture %} + +{% capture steps %} + +### Create a simple Pod to use as a test environment + +Create a file named busybox.yaml with the following contents: + +{% include code.html language="yaml" file="busybox.yaml" ghlink="/docs/tasks/administer-cluster/busybox.yaml" %} + +Then create a pod using this file and verify its status: + +```shell +$ kubectl create -f busybox.yaml +pod "busybox" created + +$ kubectl get pods busybox +NAME READY STATUS RESTARTS AGE +busybox 1/1 Running 0 +``` + +Once that pod is running, you can exec `nslookup` in that environment. +If you see something like the following, DNS is working correctly. + +```shell +$ kubectl exec -ti busybox -- nslookup kubernetes.default +Server: 10.0.0.10 +Address 1: 10.0.0.10 + +Name: kubernetes.default +Address 1: 10.0.0.1 +``` + +If the `nslookup` command fails, check the following: + +### Check the local DNS configuration first + +Take a look inside the resolv.conf file. 
+(See [Inheriting DNS from the node](#inheriting-dns-from-the-node) and +[Known issues](#known-issues) below for more information) + +```shell +$ kubectl exec busybox cat /etc/resolv.conf +``` + +Verify that the search path and name server are set up like the following +(note that search path may vary for different cloud providers): + +``` +search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal +nameserver 10.0.0.10 +options ndots:5 +``` + +Errors such as the following indicate a problem with the kube-dns add-on or +associated Services: + +``` +$ kubectl exec -ti busybox -- nslookup kubernetes.default +Server: 10.0.0.10 +Address 1: 10.0.0.10 + +nslookup: can't resolve 'kubernetes.default' +``` + +or + +``` +$ kubectl exec -ti busybox -- nslookup kubernetes.default +Server: 10.0.0.10 +Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local + +nslookup: can't resolve 'kubernetes.default' +``` + +### Check if the DNS pod is running + +Use the `kubectl get pods` command to verify that the DNS pod is running. + +```shell +$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns +NAME READY STATUS RESTARTS AGE +... +kube-dns-v19-ezo1y 3/3 Running 0 1h +... +``` + +If you see that no pod is running or that the pod has failed/completed, the DNS +add-on may not be deployed by default in your current environment and you will +have to deploy it manually. + +### Check for Errors in the DNS pod + +Use `kubectl logs` command to see logs for the DNS daemons. + +```shell +$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns +$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq +$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar +``` + +See if there is any suspicious log. Letter '`W`', '`E`', '`F`' at the beginning +represent Warning, Error and Failure. Please search for entries that have these +as the logging level and use +[kubernetes issues](https://github.com/kubernetes/kubernetes/issues) +to report unexpected errors. + +### Is DNS service up? + +Verify that the DNS service is up by using the `kubectl get service` command. + +```shell +$ kubectl get svc --namespace=kube-system +NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE +... +kube-dns 10.0.0.10 53/UDP,53/TCP 1h +... +``` + +If you have created the service or in the case it should be created by default +but it does not appear, see +[debugging services](/docs/tasks/debug-application-cluster/debug-service/) for +more information. + +### Are DNS endpoints exposed? + +You can verify that DNS endpoints are exposed by using the `kubectl get endpoints` +command. + +```shell +$ kubectl get ep kube-dns --namespace=kube-system +NAME ENDPOINTS AGE +kube-dns 10.180.3.17:53,10.180.3.17:53 1h +``` + +If you do not see the endpoints, see endpoints section in the +[debugging services](/docs/tasks/debug-application-cluster/debug-service/) documentation. + +For additional Kubernetes DNS examples, see the +[cluster-dns examples](https://github.com/kubernetes/examples/tree/master/staging/cluster-dns) +in the Kubernetes GitHub repository. + +## Known issues + +Kubernetes installs do not configure the nodes' resolv.conf files to use the +cluster DNS by default, because that process is inherently distro-specific. +This should probably be implemented eventually. 
+ +Linux's libc is impossibly stuck ([see this bug from +2005](https://bugzilla.redhat.com/show_bug.cgi?id=168253)) with limits of just +3 DNS `nameserver` records and 6 DNS `search` records. Kubernetes needs to +consume 1 `nameserver` record and 3 `search` records. This means that if a +local installation already uses 3 `nameserver`s or uses more than 3 `search`es, +some of those settings will be lost. As a partial workaround, the node can run +`dnsmasq` which will provide more `nameserver` entries, but not more `search` +entries. You can also use kubelet's `--resolv-conf` flag. + +If you are using Alpine version 3.3 or earlier as your base image, DNS may not +work properly owing to a known issue with Alpine. +Check [here](https://github.com/kubernetes/kubernetes/issues/30215) +for more information. + +## Kubernetes Federation (Multiple Zone support) + +Release 1.3 introduced Cluster Federation support for multi-site Kubernetes +installations. This required some minor (backward-compatible) changes to the +way the Kubernetes cluster DNS server processes DNS queries, to facilitate +the lookup of federated services (which span multiple Kubernetes clusters). +See the [Cluster Federation Administrators' Guide](/docs/concepts/cluster-administration/federation/) +for more details on Cluster Federation and multi-site support. + +## References + +- [DNS for Services and Pods](/docs/concepts/services-networking/dns-pod-service/) +- [Docs for the DNS cluster addon](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/dns/README.md) + +## What's next +- [Autoscaling the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/). + +{% endcapture %} + +{% include templates/task.md %} \ No newline at end of file From 45c3fe6ddcafdbf30e4495621aafeca2e46ac7fc Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Tue, 27 Feb 2018 10:53:45 -0800 Subject: [PATCH 036/117] In Setup, in front matter, change approvers to reviewers. 
(#7443) --- docs/setup/building-from-source.md | 2 +- docs/setup/independent/create-cluster-kubeadm.md | 2 +- docs/setup/independent/high-availability.md | 2 +- docs/setup/index.md | 2 +- docs/setup/pick-right-solution.md | 2 +- 5 files changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/setup/building-from-source.md b/docs/setup/building-from-source.md index e146791e3b967..e541d74480c90 100644 --- a/docs/setup/building-from-source.md +++ b/docs/setup/building-from-source.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - david-mcmahon - jbeda title: Building from Source diff --git a/docs/setup/independent/create-cluster-kubeadm.md b/docs/setup/independent/create-cluster-kubeadm.md index f24d42e8ac168..342aeabcdd774 100644 --- a/docs/setup/independent/create-cluster-kubeadm.md +++ b/docs/setup/independent/create-cluster-kubeadm.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - mikedanese - luxas - errordeveloper diff --git a/docs/setup/independent/high-availability.md b/docs/setup/independent/high-availability.md index 9985e13c3a9fb..3d17d15538edc 100644 --- a/docs/setup/independent/high-availability.md +++ b/docs/setup/independent/high-availability.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - mikedanese - luxas - errordeveloper diff --git a/docs/setup/index.md b/docs/setup/index.md index 6266fa283dcce..5ceaf1667beeb 100644 --- a/docs/setup/index.md +++ b/docs/setup/index.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - brendandburns - erictune - mikedanese diff --git a/docs/setup/pick-right-solution.md b/docs/setup/pick-right-solution.md index 5df752ea5b994..10b69f75ec1e0 100644 --- a/docs/setup/pick-right-solution.md +++ b/docs/setup/pick-right-solution.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - brendandburns - erictune - mikedanese From f81f1c0591e817cecd50e88029d8394e557b2c48 Mon Sep 17 00:00:00 2001 From: Steve Perry Date: Tue, 27 Feb 2018 10:54:45 -0800 Subject: [PATCH 037/117] In user-guide, in front matter, change approvers to reviewers. (#7444) --- docs/user-guide/update-demo/index.md.orig | 2 +- docs/user-guide/walkthrough/index.md | 2 +- docs/user-guide/walkthrough/k8s201.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/user-guide/update-demo/index.md.orig b/docs/user-guide/update-demo/index.md.orig index 50679b8b69b84..7ceb8aa5af941 100644 --- a/docs/user-guide/update-demo/index.md.orig +++ b/docs/user-guide/update-demo/index.md.orig @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - mikedanese title: Rolling Update Demo --- diff --git a/docs/user-guide/walkthrough/index.md b/docs/user-guide/walkthrough/index.md index 20fd482271e76..a2123d54fb858 100644 --- a/docs/user-guide/walkthrough/index.md +++ b/docs/user-guide/walkthrough/index.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - eparis - mikedanese title: Kubernetes 101 diff --git a/docs/user-guide/walkthrough/k8s201.md b/docs/user-guide/walkthrough/k8s201.md index 2be84046ff885..8bf5f9a2dc881 100644 --- a/docs/user-guide/walkthrough/k8s201.md +++ b/docs/user-guide/walkthrough/k8s201.md @@ -1,5 +1,5 @@ --- -approvers: +reviewers: - janetkuo - mikedanese title: Kubernetes 201 From 8ed888a76535c674730ec6e8fb1c7aa71e6e0393 Mon Sep 17 00:00:00 2001 From: Mike Wilson Date: Tue, 27 Feb 2018 13:56:46 -0500 Subject: [PATCH 038/117] Updated to reflect network offerings (#7530) Juju now supports canal and flannel. Adding docs to indicate that. 
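As a rough post-deployment check, either CNI subordinate can be verified once its
relations settle; the sketch below assumes the default application names used in
these docs (`flannel`/`canal`) and a client that can already reach the API server:

```shell
# The CNI subordinate should report active/idle units on every worker
juju status flannel          # or: juju status canal

# Nodes only reach Ready once pod networking is available
kubectl get nodes -o wide

# A throwaway pod should receive an address from the CIDR handed to the SDN
kubectl run net-test --image=busybox --restart=Never --command -- sleep 3600
kubectl get pod net-test -o wide
kubectl delete pod net-test
```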
--- .../ubuntu/networking.md | 31 ++++++++++++------- 1 file changed, 20 insertions(+), 11 deletions(-) diff --git a/docs/getting-started-guides/ubuntu/networking.md b/docs/getting-started-guides/ubuntu/networking.md index 5b5d84e16e96a..d797720b90387 100644 --- a/docs/getting-started-guides/ubuntu/networking.md +++ b/docs/getting-started-guides/ubuntu/networking.md @@ -5,34 +5,43 @@ title: Networking {% capture overview %} Kubernetes supports the [Container Network Interface (CNI)](https://github.com/containernetworking/cni). This is a network plugin architecture that allows you to use whatever -Kubernetes-friendly SDN you want. Currently this means support for Flannel. +Kubernetes-friendly SDN you want. Currently this means support for Flannel and Canal. -This page shows how to the various network portions of a cluster work, and how to configure them. +This page shows how the various network portions of a cluster work and how to configure them. {% endcapture %} {% capture prerequisites %} This page assumes you have a working Juju deployed cluster. + +**Note:** Note that if you deploy a cluster via conjure-up or the CDK bundles, manually deploying CNI plugins is unnecessary. +{: .note} {% endcapture %} {% capture steps %} -## Flannel +The CNI charms are [subordinates](https://jujucharms.com/docs/stable/authors-subordinate-applications). +These charms will require a principal charm that implements the `kubernetes-cni` interface in order to properly deploy. -The flannel charm is a -[subordinate](https://jujucharms.com/docs/stable/authors-subordinate-applications). -This charm will require a principal charm that implements the `kubernetes-cni` -interface in order to properly deploy. +## Flannel ``` juju deploy flannel -juju deploy etcd -juju deploy kubernetes-master juju add-relation flannel kubernetes-master +juju add-relation flannel kubernetes-worker juju add-relation flannel etcd ``` +## Canal + +``` +juju deploy canal +juju add-relation canal kubernetes-master +juju add-relation canal kubernetes-worker +juju add-relation canal etcd +``` + ### Configuration -**iface** The interface to configure the flannel SDN binding. If this value is +**iface** The interface to configure the flannel or canal SDN binding. If this value is empty string or undefined the code will attempt to find the default network adapter similar to the following command: @@ -40,7 +49,7 @@ adapter similar to the following command: $ route | grep default | head -n 1 | awk {'print $8'} ``` -**cidr** The network range to configure the flannel SDN to declare when +**cidr** The network range to configure the flannel or canal SDN to declare when establishing networking setup with etcd. Ensure this network range is not active on layers 2/3 you're deploying to, as it will cause collisions and odd behavior if care is not taken when selecting a good CIDR range to assign to flannel. It's From 8b916c38b455f7bd1902a4bdd8073a78a5110872 Mon Sep 17 00:00:00 2001 From: Mike Wilson Date: Tue, 27 Feb 2018 13:57:46 -0500 Subject: [PATCH 039/117] Update validation (#7550) Adding a line about e2e tests passing on a CDK cluster. 
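Illustratively, once the `kubernetes-e2e` charm is deployed and related as the
validation guide describes, a conformance run is started and inspected through a
charm action; the unit name and the `test` action name below are assumptions and
may differ between charm revisions:

```shell
# Start a full end-to-end run against the running cluster
juju run-action kubernetes-e2e/0 test

# Track the run, then pull the results using the action id returned above
juju show-action-status
juju show-action-output <action-id>
```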
--- docs/getting-started-guides/ubuntu/validation.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/getting-started-guides/ubuntu/validation.md b/docs/getting-started-guides/ubuntu/validation.md index b842db82c7d54..0fccf58967aba 100644 --- a/docs/getting-started-guides/ubuntu/validation.md +++ b/docs/getting-started-guides/ubuntu/validation.md @@ -25,6 +25,8 @@ The primary objectives of the e2e tests are to ensure a consistent and reliable behavior of the kubernetes code base, and to catch hard-to-test bugs before users do, when unit and integration tests are insufficient. +End-to-end tests will pass on a properly running CDK cluster outside of bugs in the tests. + ### Deploy kubernetes-e2e charm To deploy the end-to-end test suite, you need to relate the `kubernetes-e2e` charm From 41dfa4241e657e33293df64fbd6bf08f71948b48 Mon Sep 17 00:00:00 2001 From: Mike Wilson Date: Tue, 27 Feb 2018 13:58:45 -0500 Subject: [PATCH 040/117] Updating index (#7549) Adding information to reflect that CDK tracks upstream and doesn't alter binaries. --- docs/getting-started-guides/ubuntu/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/getting-started-guides/ubuntu/index.md b/docs/getting-started-guides/ubuntu/index.md index 8bec8f928427f..b9d23187d0b42 100644 --- a/docs/getting-started-guides/ubuntu/index.md +++ b/docs/getting-started-guides/ubuntu/index.md @@ -11,7 +11,7 @@ There are multiple ways to run a Kubernetes cluster with Ubuntu. These pages exp - [The Canonical Distribution of Kubernetes](https://www.ubuntu.com/cloud/kubernetes) -Supports AWS, GCE, Azure, Joyent, OpenStack, VMWare, Bare Metal and localhost deployments. +The latest version of Kubernetes with upstream binaries. Supports AWS, GCE, Azure, Joyent, OpenStack, VMWare, Bare Metal and localhost deployments. 
### Quick Start From 7e07a410b0457d455e7eb1fdcf5153b52dcb776e Mon Sep 17 00:00:00 2001 From: William Zhang Date: Wed, 28 Feb 2018 03:07:45 +0800 Subject: [PATCH 041/117] Fix the wrong link to pod-priority-preemption (#7545) Signed-off-by: William Zhang --- .../guaranteed-scheduling-critical-addon-pods.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md b/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md index dc8a102c626d7..960c31a23e0bf 100644 --- a/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md +++ b/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods.md @@ -23,7 +23,7 @@ vacated by the evicted critical add-on pod or the amount of resources available accordance with the [deprecation policy](/docs/reference/deprecation-policy) for beta features.** **To avoid eviction of critical pods, you must -[enable priorities in scheduler](docs/concepts/configuration/pod-priority-preemption/) +[enable priorities in scheduler](/docs/concepts/configuration/pod-priority-preemption/) before upgrading to Kubernetes 1.10 or higher.** Rescheduler ensures that critical add-ons are always scheduled From 2fe0ffc2a881b56690a08941c00e0a54b1dfd7d9 Mon Sep 17 00:00:00 2001 From: Weibin Lin Date: Wed, 28 Feb 2018 03:10:46 +0800 Subject: [PATCH 042/117] Update bootstrap-tokens.md (#7543) --- docs/admin/bootstrap-tokens.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/admin/bootstrap-tokens.md b/docs/admin/bootstrap-tokens.md index 5d287415a8915..d998504679ac5 100644 --- a/docs/admin/bootstrap-tokens.md +++ b/docs/admin/bootstrap-tokens.md @@ -93,8 +93,8 @@ stringData: expiration: 2017-03-10T03:22:11Z # Allowed usages. - usage-bootstrap-authentication: true - usage-bootstrap-signing: true + usage-bootstrap-authentication: "true" + usage-bootstrap-signing: "true" # Extra groups to authenticate the token as. Must start with "system:bootstrappers:" auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress From 0a3cf5ed826e35a892f008bf1f8c138d589135e6 Mon Sep 17 00:00:00 2001 From: AdamDang Date: Wed, 28 Feb 2018 03:11:46 +0800 Subject: [PATCH 043/117] fix a testscase->a test case (#7542) "a testscase" is not correct. --- test/examples_test.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/test/examples_test.go b/test/examples_test.go index 2bc6ba8084502..9075d9b7137bb 100644 --- a/test/examples_test.go +++ b/test/examples_test.go @@ -664,7 +664,7 @@ func TestExampleObjectSchemas(t *testing.T) { // 6) Look for #5 followed by a newline followed by ``` (end of the code block) // // This could probably be simplified, but is already too delicate. Before any -// real changes, we should have a testscase that just tests this regex. +// real changes, we should have a test case that just tests this regex. 
var sampleRegexp = regexp.MustCompile("(?ms)^```(?:(?Pyaml)\\w*\\n(?P.+?)|\\w*\\n(?P\\{.+?\\}))\\n^```") var subsetRegexp = regexp.MustCompile("(?ms)\\.{3}") From 01b2e275805059b4a3389e3d6cf98759a4bf41a4 Mon Sep 17 00:00:00 2001 From: WanLinghao Date: Wed, 28 Feb 2018 03:15:46 +0800 Subject: [PATCH 044/117] fix privileged description miss (#7515) modified: docs/concepts/policy/pod-security-policy.md --- docs/concepts/policy/pod-security-policy.md | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/docs/concepts/policy/pod-security-policy.md b/docs/concepts/policy/pod-security-policy.md index 24d7c7dd6e379..3bd22cf3d1db5 100644 --- a/docs/concepts/policy/pod-security-policy.md +++ b/docs/concepts/policy/pod-security-policy.md @@ -23,7 +23,7 @@ administrator to control the following: | Control Aspect | Field Names | | ----------------------------------------------------| ------------------------------------------- | -| Running of privileged containers | `privileged` | +| Running of privileged containers | [`privileged`](#privileged) | | Usage of the root namespaces | [`hostPID`, `hostIPC`](#host-namespaces) | | Usage of host networking and ports | [`hostNetwork`, `hostPorts`](#host-namespaces) | | Usage of volume types | [`volumes`](#volumes-and-file-systems) | @@ -354,6 +354,15 @@ several security mechanisms. ## Policy Reference +### Privileged + +**Privileged** - determines if any container in a pod can enable privileged mode. +By default a container is not allowed to access any devices on the host, but a +"privileged" container is given access to all devices on the host. This allows +the container nearly all the same access as processes running on the host. +This is useful for containers that want to use linux capabilities like +manipulating the network stack and accessing devices. + ### Host namespaces **HostPID** - Controls whether the pod containers can share the host process ID From 0e7df8d7412f4c710767b921341c00b9b529be5e Mon Sep 17 00:00:00 2001 From: Mike Wilson Date: Tue, 27 Feb 2018 14:16:46 -0500 Subject: [PATCH 045/117] Updating docs for upgrades (#7525) * Updating docs for upgrades * removing latin per https://kubernetes.io/docs/home/contribute/style-guide/#avoid-latin-phrases * More latin --- .../getting-started-guides/ubuntu/upgrades.md | 22 ++++++++----------- 1 file changed, 9 insertions(+), 13 deletions(-) diff --git a/docs/getting-started-guides/ubuntu/upgrades.md b/docs/getting-started-guides/ubuntu/upgrades.md index dae20fd2eb74d..267aa122b35f7 100644 --- a/docs/getting-started-guides/ubuntu/upgrades.md +++ b/docs/getting-started-guides/ubuntu/upgrades.md @@ -17,11 +17,11 @@ Refer to the [backup documentation](/docs/getting-started-guides/ubuntu/backups) {% endcapture %} {% capture steps %} -## Patch kubernetes upgrades eg 1.7.0 -> 1.7.1 +## Patch kubernetes upgrades for example 1.9.0 -> 1.9.1 Clusters are transparently upgraded to the latest Kubernetes patch release. -To be clear, a cluster deployed using the 1.7/stable channel -will transparently receive unattended upgrades for the 1.7.X Kubernetes +To be clear, a cluster deployed using the 1.9/stable channel +will transparently receive unattended upgrades for the 1.9.X Kubernetes releases. The upgrade causes no disruption to the operation of the cluster and requires no intervention from a cluster administrator. @@ -31,16 +31,13 @@ Once a patch release passes internal testing and is deemed safe for upgrade, it is packaged in snap format and pushed to the stable channel. 
-## Upgrading a minor Kubernetes release eg 1.7.1 -> 1.8.0 +## Upgrading a minor Kubernetes release for example 1.8.1 -> 1.9.0 The Kubernetes charms follow the Kubernetes releases. Please consult your support plan on the upgrade frequency. Important operational considerations and changes in behaviour will always be documented in the release notes. -You can use `juju status` to see if an upgrade is available. -There may be an upgrade available for kubernetes, etcd, or both. - ### Upgrade etcd Backing up etcd requires an export and snapshot, refer to the @@ -49,14 +46,13 @@ After the snapshot, upgrade the etcd service with: juju upgrade-charm etcd -This will handle upgrades between minor versions of etcd. Major upgrades from -etcd 2.x to 3.x are currently unsupported. Instead, data will be run in etcdv2 stores over the etcdv3 api. +This will handle upgrades between minor versions of etcd. Instructions on how to upgrade from 2.x to 3.x can be found [here](https://github.com/juju-solutions/bundle-canonical-kubernetes/wiki/Etcd-2.3-to-3.x-upgrade) in the juju-solutions wiki. ### Upgrade Kubernetes The Kubernetes Charms use snap channels to drive payloads. The channels are defined by `X.Y/channel` where `X.Y` is the `major.minor` release -of Kubernetes (e.g. 1.6) and `channel` is one of the four following channels: +of Kubernetes (for example 1.9) and `channel` is one of the four following channels: | Channel name | Description | | ------------------- | ------------ | @@ -66,7 +62,7 @@ of Kubernetes (e.g. 1.6) and `channel` is one of the four following channels: | edge | Nightly builds of that minor release of Kubernetes | If a release isn't available, the next highest channel is used. -For example, 1.6/beta will load `/candidate` or `/stable` depending on availability of release. +For example, 1.9/beta will load `/candidate` or `/stable` depending on availability of release. Development versions of Kubernetes are available in the edge channel for each minor release. There is no guarantee that edge snaps will work with the current charms. @@ -83,7 +79,7 @@ Once the latest charm is deployed, the channel for Kubernetes can be selected by juju config kubernetes-master channel=1.x/stable -Where `x` is the minor version of Kubernetes. For example, `1.6/stable`. See above for Channel definitions. +Where `x` is the minor version of Kubernetes. For example, `1.9/stable`. See above for Channel definitions. Once you've configured kubernetes-master with the appropriate channel, run the upgrade action on each master: juju run-action kubernetes-master/0 upgrade @@ -123,7 +119,7 @@ Tear down old workers with: juju upgrade-charm kubernetes-worker juju config kubernetes-worker channel=1.x/stable -Where `x` is the minor version of Kubernetes. For example, `1.6/stable`. +Where `x` is the minor version of Kubernetes. For example, `1.9/stable`. See above for Channel definitions. Once you've configured kubernetes-worker with the appropriate channel, run the upgrade action on each worker: From ab1398e94000eed181de466e1b1e66243997db78 Mon Sep 17 00:00:00 2001 From: Mike Wilson Date: Tue, 27 Feb 2018 14:17:47 -0500 Subject: [PATCH 046/117] Update logging documentation (#7526) Juju option has changed. 
--- docs/getting-started-guides/ubuntu/logging.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/getting-started-guides/ubuntu/logging.md b/docs/getting-started-guides/ubuntu/logging.md index e8ddb10c80d24..9c29a0ddda34a 100644 --- a/docs/getting-started-guides/ubuntu/logging.md +++ b/docs/getting-started-guides/ubuntu/logging.md @@ -26,10 +26,10 @@ Log verbosity in Juju is set at the model level. You can adjust it at any time: juju add-model k8s-development --config logging-config='=DEBUG;unit=DEBUG' ``` -and later +and later on your k8s-production model ``` -juju config-model k8s-production --config logging-config='=ERROR;unit=ERROR' +juju model-config -m k8s-production logging-config='=ERROR;unit=ERROR' ``` In addition, the jujud daemon is started in debug mode by default on all controllers. To remove that behavior edit ```/var/lib/juju/init/jujud-machine-0/exec-start.sh``` on the controller node and comment the ```--debug``` section. From 21926145fe92e1026511b5d47577a5c36af3ffa7 Mon Sep 17 00:00:00 2001 From: Michelle Au Date: Tue, 27 Feb 2018 11:21:45 -0800 Subject: [PATCH 047/117] Update docs with subpath limitation (#7533) --- docs/concepts/configuration/secret.md | 5 +++++ docs/concepts/storage/volumes.md | 16 ++++++++++++++++ .../configure-pod-configmap.md | 5 +++++ ...downward-api-volume-expose-pod-information.md | 5 +++++ 4 files changed, 31 insertions(+) diff --git a/docs/concepts/configuration/secret.md b/docs/concepts/configuration/secret.md index 044fa139403f9..96f64b6c02837 100644 --- a/docs/concepts/configuration/secret.md +++ b/docs/concepts/configuration/secret.md @@ -338,6 +338,11 @@ However, it is using its local ttl-based cache for getting the current value of As a result, the total delay from the moment when the secret is updated to the moment when new keys are projected to the pod can be as long as kubelet sync period + ttl of secrets cache in kubelet. +**Note:** A container using a Secret as a +[subPath](/docs/concepts/storage/volumes#using-subpath) volume mount will not receive +Secret updates. +{: .note} + #### Using Secrets as Environment Variables To use a secret in an environment variable in a pod: diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md index f697c04f9e411..3881dc8e54fec 100644 --- a/docs/concepts/storage/volumes.md +++ b/docs/concepts/storage/volumes.md @@ -211,6 +211,10 @@ its `log_level` entry are mounted into the Pod at path "`/etc/config/log_level`" Note that this path is derived from the volume's `mountPath` and the `path` keyed with `log_level`. +**Note:** A container using a ConfigMap as a [subPath](#using-subpath) volume mount will not +receive ConfigMap updates. +{: .note} + ### csi CSI stands for [Container Storage Interface](https://github.com/container-storage-interface/spec/blob/master/spec.md), @@ -248,6 +252,10 @@ A CSI persistent volume has the following fields for users to specify: A `downwardAPI` volume is used to make downward API data available to applications. It mounts a directory and writes the requested data in plain text files. +**Note:** A container using Downward API as a [subPath](#using-subpath) volume mount will not +receive Downward API updates. +{: .note} + See the [`downwardAPI` volume example](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) for more details. ### emptyDir @@ -687,6 +695,10 @@ parameters are nearly the same with two exceptions: volume source. 
However, as illustrated above, you can explicitly set the `mode` for each individual projection. +**Note:** A container using a projected volume source as a [subPath](#using-subpath) volume mount will not +receive updates for those volume sources. +{: .note} + ### portworxVolume A `portworxVolume` is an elastic block storage layer that runs hyperconverged with @@ -807,6 +819,10 @@ non-volatile storage. **Important:** You must create a secret in the Kubernetes API before you can use it. {: .caution} +**Note:** A container using a Secret as a [subPath](#using-subpath) volume mount will not +receive Secret updates. +{: .note} + Secrets are described in more detail [here](/docs/user-guide/secrets). ### storageOS diff --git a/docs/tasks/configure-pod-container/configure-pod-configmap.md b/docs/tasks/configure-pod-container/configure-pod-configmap.md index f1a6c6949ffbb..ed36a60a0a41a 100644 --- a/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -476,6 +476,11 @@ basis. The [Secrets](/docs/concepts/configuration/secret/#using-secrets-as-files When a ConfigMap already being consumed in a volume is updated, projected keys are eventually updated as well. Kubelet is checking whether the mounted ConfigMap is fresh on every periodic sync. However, it is using its local ttl-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as kubelet sync period + ttl of ConfigMaps cache in kubelet. +**Note:** A container using a ConfigMap as a +[subPath](/docs/concepts/storage/volumes/#using-subpath) volume will not receive +ConfigMap updates. +{: .note} + {% endcapture %} {% capture discussion %} diff --git a/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md b/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md index 30e937e72dd66..90440cc7f2fdc 100644 --- a/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md +++ b/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information.md @@ -135,6 +135,11 @@ written to a new temporary directory, and the `..data` symlink is updated atomically using [rename(2)](http://man7.org/linux/man-pages/man2/rename.2.html). +**Note:** A container using Downward API as a +[subPath](/docs/concepts/storage/volumes/#using-subpath) volume mount will not +receive Downward API updates. 
+{: .note} + Exit the shell: ```shell From 112a6e59283eb98f3ccdd4b1f787d065f63e843a Mon Sep 17 00:00:00 2001 From: Kay Yan Date: Wed, 28 Feb 2018 03:24:45 +0800 Subject: [PATCH 048/117] fix-typo-in-daemonset (#7534) --- docs/concepts/workloads/controllers/daemonset.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/workloads/controllers/daemonset.md b/docs/concepts/workloads/controllers/daemonset.md index e2c00863b89a9..a20f0f8d39f23 100644 --- a/docs/concepts/workloads/controllers/daemonset.md +++ b/docs/concepts/workloads/controllers/daemonset.md @@ -124,7 +124,7 @@ due to hard-coded behavior of the NodeController rather than due to tolerations) - `node.kubernetes.io/disk-pressure` When the support to critical pods is enabled and the pods in a DaemonSet are -labelled as critical, the Daemon pods are created with an additional +labeled as critical, the Daemon pods are created with an additional `NoSchedule` toleration for the `node.kubernetes.io/out-of-disk` taint. Note that all above `NoSchedule` taints above are created only in version 1.8 or later if the alpha feature `TaintNodesByCondition` is enabled. From a90eb022ebfc8a37c4327e292c775e054a86301c Mon Sep 17 00:00:00 2001 From: Robert Morse Date: Tue, 27 Feb 2018 12:27:45 -0700 Subject: [PATCH 049/117] Getting Started Updates for SIG-Windows: take 2 (#7535) * Pull out build instructions * General clarifications and remove reference to merged kube-proxy PR * Updates to sample yaml files, and clarification on host OS version (2016/latest vs 1709 tags) * Reordering for clarity * Add tag for Server 1709 * Addressing feedback * Address feedback --- docs/getting-started-guides/windows/index.md | 138 ++++++++++--------- 1 file changed, 73 insertions(+), 65 deletions(-) diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md index 8db4a79706493..cc2c53b32465a 100644 --- a/docs/getting-started-guides/windows/index.md +++ b/docs/getting-started-guides/windows/index.md @@ -16,35 +16,10 @@ The Kubernetes control plane (API Server, Scheduler, Controller Manager, etc) co **Note:** Windows Server Containers on Kubernetes is a Beta feature in Kubernetes v1.9 {: .note} -## Build -We recommend using the release binaries that can be found at [https://github.com/kubernetes/kubernetes/releases](https://github.com/kubernetes/kubernetes/releases). Look for the Node Binaries section by visiting the binary downloads link. +## Get Windows Binaries +We recommend using the release binaries that can be found at [https://github.com/kubernetes/kubernetes/releases/latest](https://github.com/kubernetes/kubernetes/releases/latest). Under the CHANGELOG you can find the Node Binaries link for Windows-amd64, which will include kubeadm, kubectl, kubelet and kube-proxy. -If you wish to build the code yourself, please follow the next instructions: - -1. Install the pre-requisites on a Linux host: - - ``` - sudo apt-get install curl git build-essential docker.io conntrack - ``` -2. Run the following commands to build kubelet and kube-proxy: - - ```bash - K8SREPO="github.com/kubernetes/kubernetes" - go get -d $K8SREPO - # Note: the above command may spit out a message about - # "no Go files in...", but it can be safely ignored! 
- - cd $GOPATH/src/k8s.io/kubernetes - # Build the kubelet - KUBE_BUILD_PLATFORMS=windows/amd64 make WHAT=cmd/kubelet - - # Build the kube-proxy - KUBE_BUILD_PLATFORMS=windows/amd64 make WHAT=cmd/kube-proxy - - # You will find the output binaries under the folder _output/local/bin/windows/ -``` - -More detailed build instructions will be maintained and kept up to date [here](https://github.com/MicrosoftDocs/Virtualization-Documentation/blob/live/virtualization/windowscontainers/kubernetes/compiling-kubernetes-binaries.md). +If you wish to build the code yourself, please refer to detailed build instructions [here](https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/compiling-kubernetes-binaries). ## Prerequisites In Kubernetes version 1.9 or later, Windows Server Containers for Kubernetes are supported using the following: @@ -77,9 +52,7 @@ Windows supports the CNI network model and uses plugins to interface with the Wi #### Upstream L3 Routing Topology In this topology, networking is achieved using L3 routing with static IP routes configured in an upstream Top of Rack (ToR) switch/router. Each cluster node is connected to the management network with a host IP. Additionally, each node uses a local 'l2bridge' network with a pod CIDR assigned. All pods on a given worker node will be connected to the pod CIDR subnet ('l2bridge' network). In order to enable network communication between pods running on different nodes, the upstream router has static routes configured with pod CIDR prefix => Host IP. -Each Window Server node should have the following configuration: - -The following diagram illustrates the Windows Server networking setup for Kubernetes using Upstream L3 Routing Setup: +The following example diagram illustrates the Windows Server networking setup for Kubernetes using Upstream L3 Routing Setup: ![K8s Cluster using L3 Routing with ToR](UpstreamRouting.png) #### Host-Gateway Topology @@ -111,7 +84,7 @@ To run Windows Server Containers on Kubernetes, you'll need to set up both your 1. Windows Server container host running the required Windows Server and Docker versions. Follow the setup instructions outlined by this help topic: https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/quick-start-windows-server. -2. [Build](#Build) or download kubelet.exe, kube-proxy.exe, and kubectl.exe using instructions +2. [Get Windows Binaries](#get-windows-binaries) kubelet.exe, kube-proxy.exe, and kubectl.exe using instructions 3. Copy Node spec file (kube config) from Linux master node with X.509 keys 4. Create the HNS Network, ensure the correct CNI network config, and start kubelet.exe using this script [start-kubelet.ps1](https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/start-kubelet.ps1) 5. Start kube-proxy using this script [start-kubeproxy.ps1](https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/start-kubeproxy.ps1) @@ -120,7 +93,7 @@ To run Windows Server Containers on Kubernetes, you'll need to set up both your More detailed instructions can be found [here](https://github.com/MicrosoftDocs/Virtualization-Documentation/blob/live/virtualization/windowscontainers/kubernetes/getting-started-kubernetes-windows.md). **Windows CNI Config Example** -Today, Windows CNI plugin is based on wincni.exe code with the following example, configuration file. +Today, Windows CNI plugin is based on wincni.exe code with the following example, configuration file. 
This is based on the ToR example diagram shown above, specifying the configuration to apply to Windows node-1. Of special interest is Windows node-1 pod CIDR (10.10.187.64/26) and the associated gateway of cbr0 (10.10.187.66). The exception list is specifying the Service CIDR (11.0.0.0/8), Cluster CIDR (10.10.0.0/16), and Management (or Host) CIDR (10.127.132.128/25). Note: this file assumes that a user previous created 'l2bridge' host networks on each Windows node using `-HNSNetwork` cmdlets as shown in the `start-kubelet.ps1` and `start-kubeproxy.ps1` scripts linked above @@ -254,8 +227,23 @@ To start your cluster, you'll need to start both the Linux-based Kubernetes cont ## Starting the Linux-based Control Plane Use your preferred method to start Kubernetes cluster on Linux. Please note that Cluster CIDR might need to be updated. -## Scheduling Pods on Windows -Because your cluster has both Linux and Windows nodes, you must explicitly set the nodeSelector constraint to be able to schedule pods to Windows nodes. You must set nodeSelector with the label beta.kubernetes.io/os to the value windows; see the following example: +## Support for kubeadm join + +If your cluster has been created by [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/), +and your networking is setup correctly using one of the methods listed above (networking is setup outside of kubeadm), you can use kubeadm to add a Windows node to your cluster. At a high level, you first have to initialize the master with kubeadm (Linux), then set up the CNI based networking (outside of kubeadm), and finally start joining Windows or Linux worker nodes to the cluster. For additional documentation and reference material, visit the kubeadm link above. + +The kubeadm binary can be found at [Kubernetes Releases](https://github.com/kubernetes/kubernetes/releases), inside the node binaries archive. Adding a Windows node is not any different than adding a Linux node: + +`kubeadm.exe join --token : --discovery-token-ca-cert-hash sha256:` + +See [joining-your-nodes](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#44-joining-your-nodes) for more details. + +## Supported Features + +The examples listed below assume running Windows nodes on Windows Server 1709. If you are running Windows Server 2016, the examples will need the image updated to specify `image: microsoft/windowsservercore:ltsc2016`. This is due to the requirement for container images to match the host operating system version when using process isolation. Not specifying a tag will implicitly use the `:latest` tag which can lead to surprising behaviors. Please consult with [https://hub.docker.com/r/microsoft/windowsservercore/](https://hub.docker.com/r/microsoft/windowsservercore/) for additional information on Windows Server Core image tagging. + +### Scheduling Pods on Windows +Because your cluster has both Linux and Windows nodes, you must explicitly set the `nodeSelector` constraint to be able to schedule pods to Windows nodes. 
You must set nodeSelector with the label `beta.kubernetes.io/os` to the value `windows`; see the following example: ```yaml { @@ -271,7 +259,7 @@ Because your cluster has both Linux and Windows nodes, you must explicitly set t "containers": [ { "name": "iis", - "image": "microsoft/iis", + "image": "microsoft/iis:windowsservercore-1709", "ports": [ { "containerPort": 80 @@ -285,18 +273,7 @@ Because your cluster has both Linux and Windows nodes, you must explicitly set t } } ``` -## Support for kubeadm join - -If your cluster has been created by [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/), -and your networking is setup correctly using one of the methods listed above (networking is setup outside of kubeadm), you can use kubeadm to add a Windows node to your cluster. At a high level, you first have to initialize the master with kubeadm (Linux), then set up the CNI based networking (outside of kubeadm), and finally start joining Windows or Linux worker nodes to the cluster. For additional documentation and reference material, visit the kubeadm link above. - -The kubeadm binary can be found at [Kubernetes Releases](https://github.com/kubernetes/kubernetes/releases), inside the node binaries archive. Adding a Windows node is not any different than adding a Linux node: - -`kubeadm.exe join --token : --discovery-token-ca-cert-hash sha256:` - -See [joining-your-nodes](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#44-joining-your-nodes) for more details. - -## Supported Features +**Note:** this example assumes you are running on Windows Server 1709, so uses the image tag to support that. If you are on a different version, you will need to update the tag. For example, if on Windows Server 2016, update to use `"image": "microsoft/iis"` which will default to that OS version. ### Secrets and ConfigMaps Secrets and ConfigMaps can be utilized in Windows Server Containers, but must be used as environment variables. See limitations section below for additional details. @@ -319,11 +296,11 @@ Secrets and ConfigMaps can be utilized in Windows Server Containers, but must be apiVersion: v1 kind: Pod metadata: - name: mypod-secret + name: my-secret-pod spec: containers: - - name: mypod-secret - image: redis:3.0-nanoserver + - name: my-secret-pod + image: microsoft/windowsservercore:1709 env: - name: USERNAME valueFrom: @@ -355,11 +332,11 @@ data: apiVersion: v1 kind: Pod metadata: - name: configmap-pod + name: my-configmap-pod spec: containers: - - name: configmap-redis - image: redis:3.0-nanoserver + - name: my-configmap-pod + image: microsoft/windowsservercore:1709 env: - name: EXAMPLE_PROPERTY_1 valueFrom: @@ -387,19 +364,19 @@ Persistent Volume Claims are supported for supported volume types. apiVersion: v1 kind: Pod metadata: - name: hostpath-volume-pod + name: my-hostpath-volume-pod spec: containers: - - name: hostpath-redis - image: redis:3.0-nanoserver + - name: my-hostpath-volume-pod + image: microsoft/windowsservercore:1709 volumeMounts: - - name: blah + - name: foo mountPath: "C:\\etc\\foo" readOnly: true nodeSelector: beta.kubernetes.io/os: windows volumes: - - name: blah + - name: foo hostPath: path: "C:\\etc\\foo" ``` @@ -410,11 +387,11 @@ Persistent Volume Claims are supported for supported volume types. 
apiVersion: v1 kind: Pod metadata: - name: empty-dir-pod + name: my-empty-dir-pod spec: containers: - - image: redis:3.0-nanoserver - name: empty-dir-redis + - image: microsoft/windowsservercore:1709 + name: my-empty-dir-pod volumeMounts: - mountPath: /cache name: cache-volume @@ -428,7 +405,31 @@ Persistent Volume Claims are supported for supported volume types. nodeSelector: beta.kubernetes.io/os: windows ``` - + +### DaemonSets + +DaemonSets are supported + +```yaml +apiVersion: extensions/v1beta1 +kind: DaemonSet +metadata: + name: my-DaemonSet + labels: + app: foo +spec: + template: + metadata: + labels: + app: foo + spec: + containers: + - name: foo + image: microsoft/windowsservercore:1709 + nodeSelector: + beta.kubernetes.io/os: windows +``` + ### Metrics Windows Stats use a hybrid model: pod and container level stats come from CRI (via dockershim), while node level stats come from the "winstats" package that exports cadvisor like data structures using windows specific perf counters from the node. @@ -437,9 +438,16 @@ Windows Stats use a hybrid model: pod and container level stats come from CRI (v Some of these limitations will be addressed by the community in future releases of Kubernetes - Shared network namespace (compartment) with multiple Windows Server containers (shared kernel) per pod is only supported on Windows Server 1709 or later - Using Secrets and ConfigMaps as volume mounts is not supported +- Mount propagation is not supported on Windows - The StatefulSet functionality for stateful applications is not supported - Horizontal Pod Autoscaling for Windows Server Container pods has not been verified to work end-to-end -- Hyper-V Containers are not supported +- Hyper-V isolated containers are not supported. +- Windows container OS must match the Host OS. If it does not, the pod will get stuck in a crash loop. +- Under the networking models of L3 or Host GW, Kubernetes Services are inaccessible to Windows nodes due to a Windows issue. This is not an issue if using OVN/OVS for networking. +- Windows kubelet.exe may fail to start when running on Windows Server under VMWare Fusion [issue 57110](https://github.com/kubernetes/kubernetes/pull/57124) +- Flannel and Weavenet are not yet supported +## Next steps and resources -> As of this writing, the Kube-proxy binary requires a pending Kubernetes [pull request](https://github.com/kubernetes/kubernetes/pull/56529) to work properly. You may need to [build](#build) the binaries manually to work around this. +- Support for Windows is in Beta as of v1.9 and your feedback is welcome. 
For information on getting involved, please head to [SIG-Windows](https://github.com/kubernetes/community/blob/master/sig-windows/README.md) +- Troubleshooting and Common Problems: [Link](https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/common-problems) From e8a60186a7de80d78d22cd5d22237940ee1c343a Mon Sep 17 00:00:00 2001 From: AdamDang Date: Wed, 28 Feb 2018 03:28:45 +0800 Subject: [PATCH 050/117] Fix a container->a Container (#7536) --- docs/concepts/containers/container-environment-variables.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/containers/container-environment-variables.md b/docs/concepts/containers/container-environment-variables.md index 9ed5eb7775cae..0637e72f6f52f 100644 --- a/docs/concepts/containers/container-environment-variables.md +++ b/docs/concepts/containers/container-environment-variables.md @@ -41,7 +41,7 @@ as are any environment variables specified statically in the Docker image. A list of all services that were running when a Container was created is available to that Container as environment variables. Those environment variables match the syntax of Docker links. -For a service named *foo* that maps to a container named *bar*, +For a service named *foo* that maps to a Container named *bar*, the following variables are defined: ```shell From d79f4d82cb69504591ae8e49ad45cfbc7a4b2faf Mon Sep 17 00:00:00 2001 From: siliangxifeng1988 <36875607+siliangxifeng1988@users.noreply.github.com> Date: Wed, 28 Feb 2018 03:29:45 +0800 Subject: [PATCH 051/117] Update statefulset.md (#7537) --- docs/concepts/workloads/controllers/statefulset.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/workloads/controllers/statefulset.md b/docs/concepts/workloads/controllers/statefulset.md index fe97e79f8e33e..a19691ce59069 100644 --- a/docs/concepts/workloads/controllers/statefulset.md +++ b/docs/concepts/workloads/controllers/statefulset.md @@ -99,7 +99,7 @@ spec: name: www spec: accessModes: [ "ReadWriteOnce" ] - storageClassName: my-storage-class + storageClassName: "my-storage-class" resources: requests: storage: 1Gi From d807e5303da865b574a86149d1434353409f90cd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ronny=20L=C3=B3pez?= Date: Tue, 27 Feb 2018 20:31:44 +0100 Subject: [PATCH 052/117] Update service.md (#7539) --- docs/concepts/services-networking/service.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/services-networking/service.md b/docs/concepts/services-networking/service.md index 236d43e8d9fba..c4ceadab4115a 100644 --- a/docs/concepts/services-networking/service.md +++ b/docs/concepts/services-networking/service.md @@ -318,7 +318,7 @@ variables will not be populated. DNS does not have this restriction. ### DNS An optional (though strongly recommended) [cluster -add-on](http://releases.k8s.io/{{page.githubbranch}}/cluster/addons/README.md) is a DNS server. The +add-on](/docs/concepts/cluster-administration/addons/) is a DNS server. The DNS server watches the Kubernetes API for new `Services` and creates a set of DNS records for each. If DNS has been enabled throughout the cluster then all `Pods` should be able to do name resolution of `Services` automatically. From a6a480bba2d5efc06e7e2068e314edd1e50b47b4 Mon Sep 17 00:00:00 2001 From: AdamDang Date: Wed, 28 Feb 2018 03:38:47 +0800 Subject: [PATCH 053/117] fix KUbernetes->Kubernetes (#7544) "KUbernetes" is not standard. 
--- .../overview/working-with-objects/kubernetes-objects.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md index dc7a1701dd0fe..33a2507d42625 100644 --- a/cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/cn/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -52,12 +52,12 @@ Kubernetes 系统读取 Deployment 规约,并启动我们所期望的该应用 ### 描述 Kubernetes 对象 -当创建 KUbernetes 对象时,必须提供对象的规约,用来描述该对象的期望状态,以及关于对象的一些基本信息(例如名称)。 -当使用 KUbernetes API 创建对象时(或者直接创建,或者基于`kubectl`),API 请求必须在请求体中包含 JSON 格式的信息。 +当创建 Kubernetes 对象时,必须提供对象的规约,用来描述该对象的期望状态,以及关于对象的一些基本信息(例如名称)。 +当使用 Kubernetes API 创建对象时(或者直接创建,或者基于`kubectl`),API 请求必须在请求体中包含 JSON 格式的信息。 **大多数情况下,需要在 .yaml 文件中为 `kubectl` 提供这些信息**。 `kubectl` 在发起 API 请求时,将这些信息转换成 JSON 格式。 -这里有一个 `.yaml` 示例文件,展示了 KUbernetes Deployment 的必需字段和对象规约: +这里有一个 `.yaml` 示例文件,展示了 Kubernetes Deployment 的必需字段和对象规约: {% include code.html language="yaml" file="nginx-deployment.yaml" ghlink="/docs/concepts/overview/working-with-objects/nginx-deployment.yaml" %} @@ -78,7 +78,7 @@ deployment "nginx-deployment" created ### 必需字段 -在想要创建的 KUbernetes 对象对应的 `.yaml` 文件中,需要配置如下的字段: +在想要创建的 Kubernetes 对象对应的 `.yaml` 文件中,需要配置如下的字段: * `apiVersion` - 创建该对象所使用的 Kubernetes API 的版本 * `kind` - 想要创建的对象的类型 From 8aaa23971e187c273e709da2ffd6b9f701931192 Mon Sep 17 00:00:00 2001 From: Mike Wilson Date: Tue, 27 Feb 2018 14:39:47 -0500 Subject: [PATCH 054/117] Update glossary page (#7547) Changed some unit and charm names to match what we actually ship now. --- docs/getting-started-guides/ubuntu/glossary.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/getting-started-guides/ubuntu/glossary.md b/docs/getting-started-guides/ubuntu/glossary.md index c48aa821123d5..30cbd8e142cb1 100644 --- a/docs/getting-started-guides/ubuntu/glossary.md +++ b/docs/getting-started-guides/ubuntu/glossary.md @@ -13,9 +13,9 @@ This page explains some of the terminology used in deploying Kubernetes with Juj **model** - A collection of charms and their relationships that define a deployment. This includes machines and units. A controller can host multiple models. It is recommended to separate Kubernetes clusters into individual models for management and isolation reasons. -**charm** - The definition of a service, including its metadata, dependencies with other services, required packages, and application management logic. It contains all the operational knowledge of deploying a Kubernetes cluster. Included charm examples are `kubernetes-core`, `easy-rsa`, `kibana`, and `etcd`. +**charm** - The definition of a service, including its metadata, dependencies with other services, required packages, and application management logic. It contains all the operational knowledge of deploying a Kubernetes cluster. Included charm examples are `kubernetes-core`, `easyrsa`, `flannel`, and `etcd`. -**unit** - A given instance of a service. These may or may not use up a whole machine, and may be colocated on the same machine. So for example you might have a `kubernetes-worker`, and `filebeat`, and `topbeat` units running on a single machine, but they are three distinct units of different services. +**unit** - A given instance of a service. These may or may not use up a whole machine, and may be colocated on the same machine. 
So for example you might have a `kubernetes-worker`, and `etcd`, and `easyrsa` units running on a single machine, but they are three distinct units of different services. **machine** - A physical node, these can either be bare metal nodes, or virtual machines provided by a cloud. {% endcapture %} From 36d6fb25a0cd9ff32eaad669fef666a896d02053 Mon Sep 17 00:00:00 2001 From: Matt Braymer-Hayes Date: Tue, 27 Feb 2018 11:59:46 -0800 Subject: [PATCH 055/117] Update pod.md (#7497) Remove statements about the future, fix formatting. --- docs/concepts/workloads/pods/pod.md | 31 ++++++++++++++--------------- 1 file changed, 15 insertions(+), 16 deletions(-) diff --git a/docs/concepts/workloads/pods/pod.md b/docs/concepts/workloads/pods/pod.md index bb4e05ca65dd2..988e74a007ad5 100644 --- a/docs/concepts/workloads/pods/pod.md +++ b/docs/concepts/workloads/pods/pod.md @@ -13,7 +13,7 @@ managed in Kubernetes. ## What is a Pod? A _pod_ (as in a pod of whales or pea pod) is a group of one or more containers -(such as Docker containers), with shared storage/network, and a specification +(such as Docker containers), with shared storage/network, and a specification for how to run the containers. A pod's contents are always co-located and co-scheduled, and run in a shared context. A pod models an application-specific "logical host" - it contains one or more application @@ -42,7 +42,7 @@ filesystem. In terms of [Docker](https://www.docker.com/) constructs, a pod is modelled as a group of Docker containers with shared namespaces and shared -[volumes](/docs/concepts/storage/volumes/). +[volumes](/docs/concepts/storage/volumes/). Like individual application containers, pods are considered to be relatively ephemeral (rather than durable) entities. As discussed in [life of a @@ -52,8 +52,7 @@ policy) or deletion. If a node dies, the pods scheduled to that node are scheduled for deletion, after a timeout period. A given pod (as defined by a UID) is not "rescheduled" to a new node; instead, it can be replaced by an identical pod, with even the same name if desired, but with a new UID (see [replication -controller](/docs/concepts/workloads/controllers/replicationcontroller/) for more details). (In the future, a -higher-level API may support pod migration.) +controller](/docs/concepts/workloads/controllers/replicationcontroller/) for more details). When something is said to have the same lifetime as a pod, such as a volume, that means that it exists as long as that pod (with that UID) exists. If that @@ -121,12 +120,12 @@ _Why not just run multiple programs in a single (Docker) container?_ infrastructure enables the infrastructure to provide services to those containers, such as process management and resource monitoring. This facilitates a number of conveniences for users. -2. Decoupling software dependencies. The individual containers may be +1. Decoupling software dependencies. The individual containers may be versioned, rebuilt and redeployed independently. Kubernetes may even support live updates of individual containers someday. -3. Ease of use. Users don't need to run their own process managers, worry about +1. Ease of use. Users don't need to run their own process managers, worry about signal and exit-code propagation, etc. -4. Efficiency. Because the infrastructure takes on more responsibility, +1. Efficiency. Because the infrastructure takes on more responsibility, containers can be lighter weight. 
_Why not support affinity-based co-scheduling of containers?_ @@ -156,7 +155,7 @@ Pod is exposed as a primitive in order to facilitate: * decoupling of pod lifetime from controller lifetime, such as for bootstrapping * decoupling of controllers and services — the endpoint controller just watches pods * clean composition of Kubelet-level functionality with cluster-level functionality — Kubelet is effectively the "pod controller" -* high-availability applications, which will expect pods to be replaced in advance of their termination and certainly in advance of deletion, such as in the case of planned evictions, image prefetching, or live pod migration [#3949](http://issue.k8s.io/3949) +* high-availability applications, which will expect pods to be replaced in advance of their termination and certainly in advance of deletion, such as in the case of planned evictions or image prefetching. ## Termination of Pods @@ -165,14 +164,14 @@ Because pods represent running processes on nodes in the cluster, it is importan An example flow: 1. User sends command to delete Pod, with default grace period (30s) -2. The Pod in the API server is updated with the time beyond which the Pod is considered "dead" along with the grace period. -3. Pod shows up as "Terminating" when listed in client commands -4. (simultaneous with 3) When the Kubelet sees that a Pod has been marked as terminating because the time in 2 has been set, it begins the pod shutdown process. - 1. If the pod has defined a [preStop hook](/docs/concepts/containers/container-lifecycle-hooks/#hook-details), it is invoked inside of the pod. If the `preStop` hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) extended grace period. - 2. The processes in the Pod are sent the TERM signal. -5. (simultaneous with 3), Pod is removed from endpoints list for service, and are no longer considered part of the set of running pods for replication controllers. Pods that shutdown slowly can continue to serve traffic as load balancers (like the service proxy) remove them from their rotations. -6. When the grace period expires, any processes still running in the Pod are killed with SIGKILL. -7. The Kubelet will finish deleting the Pod on the API server by setting grace period 0 (immediate deletion). The Pod disappears from the API and is no longer visible from the client. +1. The Pod in the API server is updated with the time beyond which the Pod is considered "dead" along with the grace period. +1. Pod shows up as "Terminating" when listed in client commands +1. (simultaneous with 3) When the Kubelet sees that a Pod has been marked as terminating because the time in 2 has been set, it begins the pod shutdown process. + 1. If the pod has defined a [preStop hook](/docs/concepts/containers/container-lifecycle-hooks/#hook-details), it is invoked inside of the pod. If the `preStop` hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) extended grace period. + 1. The processes in the Pod are sent the TERM signal. +1. (simultaneous with 3) Pod is removed from endpoints list for service, and are no longer considered part of the set of running pods for replication controllers. Pods that shutdown slowly can continue to serve traffic as load balancers (like the service proxy) remove them from their rotations. +1. When the grace period expires, any processes still running in the Pod are killed with SIGKILL. +1. 
The Kubelet will finish deleting the Pod on the API server by setting grace period 0 (immediate deletion). The Pod disappears from the API and is no longer visible from the client. By default, all deletes are graceful within 30 seconds. The `kubectl delete` command supports the `--grace-period=` option which allows a user to override the default and specify their own value. The value `0` [force deletes](/docs/concepts/workloads/pods/pod/#force-deletion-of-pods) the pod. In kubectl version >= 1.5, you must specify an additional flag `--force` along with `--grace-period=0` in order to perform force deletions. From 22b73015977d84fb7744d30e7c5c44ff97ccf506 Mon Sep 17 00:00:00 2001 From: Ivange Larry Date: Tue, 27 Feb 2018 21:00:45 +0100 Subject: [PATCH 056/117] Add glossary entry for Cloud Controller Manager (#6952) * Add glossary entry for Cloud Controller Manager * Update cloud-controller-manager.yml fixing tag in glossary --- _data/glossary/cloud-controller-manager.yml | 13 +++++++++++++ 1 file changed, 13 insertions(+) create mode 100644 _data/glossary/cloud-controller-manager.yml diff --git a/_data/glossary/cloud-controller-manager.yml b/_data/glossary/cloud-controller-manager.yml new file mode 100644 index 0000000000000..dd819c42147d1 --- /dev/null +++ b/_data/glossary/cloud-controller-manager.yml @@ -0,0 +1,13 @@ +id: cloud-controller-manager +name: Cloud Controller Manager +full-link: https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/ +tags: +- core-object +- architecture +- operation +short-description: > + Cloud Controller Manager is an alpha feature in 1.8. In upcoming releases it will be the preferred way to integrate Kubernetes with any cloud. +long-description: > + Kubernetes v1.6 contains a new binary called cloud-controller-manager. cloud-controller-manager is a daemon that embeds cloud-specific control loops. + These cloud-specific control loops were originally in the kube-controller-manager. Since cloud providers develop and release at a different pace compared to the Kubernetes + project, abstracting the provider-specific code to the cloud-controller-manager binary allows cloud vendors to evolve independently from the core Kubernetes code. From 5d9c2d35bead6ee1177275b901dc12f837213105 Mon Sep 17 00:00:00 2001 From: Mike Wilson Date: Tue, 27 Feb 2018 22:43:46 -0500 Subject: [PATCH 057/117] Updating privileged container information (#7548) * Updating privileged container information We now use privileged containers by default on GPU-enabled nodes. * Changes for consistency Replaced worker with node. --- .../ubuntu/operational-considerations.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/getting-started-guides/ubuntu/operational-considerations.md b/docs/getting-started-guides/ubuntu/operational-considerations.md index 9c4a916a3a0db..b3f1fc758d4db 100644 --- a/docs/getting-started-guides/ubuntu/operational-considerations.md +++ b/docs/getting-started-guides/ubuntu/operational-considerations.md @@ -115,8 +115,8 @@ juju switch default ### Running privileged containers -By default, juju-deployed clusters do not support running privileged containers. -If you need them, you have to enable the ```allow-privileged``` config on both +By default, juju-deployed clusters only allow running privileged containers on nodes with GPUs. 
+If you need privileged containers on other nodes, you have to enable the ```allow-privileged``` config on both kubernetes-master and kubernetes-worker: ``` From 4bbc2c77c6d80a975c217683c7aac8bee1f3bec4 Mon Sep 17 00:00:00 2001 From: Mike Wilson Date: Tue, 27 Feb 2018 23:08:46 -0500 Subject: [PATCH 058/117] Updating troubleshooting (#7546) * Updating troubleshooting Changed debug method over to CDK Field Agent and replaced old logging suggestion with the logging page. * Rewording to hopefully clarify Please let me know if this still isn't clear. It's a deceptively dense word soup and I wonder if I need to just expand on it some to remove assumptions. --- .../ubuntu/troubleshooting.md | 38 ++----------------- 1 file changed, 4 insertions(+), 34 deletions(-) diff --git a/docs/getting-started-guides/ubuntu/troubleshooting.md b/docs/getting-started-guides/ubuntu/troubleshooting.md index fdcc2ad8fd40b..134b6cd67263f 100644 --- a/docs/getting-started-guides/ubuntu/troubleshooting.md +++ b/docs/getting-started-guides/ubuntu/troubleshooting.md @@ -68,41 +68,11 @@ This will automatically ssh you to the easyrsa unit. ## Collecting debug information -Sometimes it is useful to collect all the information from a node to share with a developer so problems can be identifying. This section will deal on how to use the debug action to collect this information. The debug action is only supported on `kubernetes-worker` nodes. +Sometimes it is useful to collect all the information from a cluster to share with a developer to identify problems. This is best accomplished with [CDK Field Agent](https://github.com/juju-solutions/cdk-field-agent). - juju run-action kubernetes-worker/0 debug +Download and execute the collect.py script from [CDK Field Agent](https://github.com/juju-solutions/cdk-field-agent) on a box that has a Juju client configured with the current controller and model pointing at the CDK deployment of interest. -Which returns: - - -``` -Action queued with id: 4b26e339-7366-4dc7-80ed-255ac0377020` -``` - -This produces a .tar.gz file which you can retrieve: - - juju show-action-output 4b26e339-7366-4dc7-80ed-255ac0377020 - -This will give you the path for the debug results: - -``` -results: - command: juju scp debug-test/0:/home/ubuntu/debug-20161110151539.tar.gz . - path: /home/ubuntu/debug-20161110151539.tar.gz -status: completed -timing: - completed: 2016-11-10 15:15:41 +0000 UTC - enqueued: 2016-11-10 15:15:38 +0000 UTC - started: 2016-11-10 15:15:40 +0000 UTC -``` - -You can now copy the results to your local machine: - - juju scp kubernetes-worker/0:/home/ubuntu/debug-20161110151539.tar.gz . - -The archive includes basic information such as systemctl status, Juju logs, -charm unit data, etc. Additional application-specific information may be -included as well. +Running the script will generate a tarball of system information and includes basic information such as systemctl status, Juju logs, charm unit data, etc. Additional application-specific information may be included as well. ## Common Problems @@ -204,7 +174,7 @@ This is caused by the API load balancer not forwarding ports in the context of t ## Logging and monitoring -By default there is no log aggregation of the Kubernetes nodes, each node logs locally. It is recommended to deploy the Elastic Stack for log aggregation if you desire centralized logging. +By default there is no log aggregation of the Kubernetes nodes, each node logs locally. 
Please read over the [logging](https://kubernetes.io/docs/getting-started-guides/ubuntu/logging/) page for more information. {% endcapture %} {% include templates/task.md %} From 23342f23d9b0e360b2bc83f3fe04c23d9e726cd4 Mon Sep 17 00:00:00 2001 From: bryangunn <30510064+bryangunn@users.noreply.github.com> Date: Tue, 27 Feb 2018 23:14:45 -0500 Subject: [PATCH 059/117] Update namespaces-walkthrough.md (#7551) Fixed a couple typos. --- docs/tasks/administer-cluster/namespaces-walkthrough.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tasks/administer-cluster/namespaces-walkthrough.md b/docs/tasks/administer-cluster/namespaces-walkthrough.md index 579c2f490625f..ef8de898b8250 100644 --- a/docs/tasks/administer-cluster/namespaces-walkthrough.md +++ b/docs/tasks/administer-cluster/namespaces-walkthrough.md @@ -28,7 +28,7 @@ This example assumes the following: By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods, Services, and Deployments used by the cluster. -Assuming you have a fresh cluster, you can introspect the available namespace's by doing the following: +Assuming you have a fresh cluster, you can inspect the available namespaces by doing the following: ```shell $ kubectl get namespaces From b99d1b13947c169a65a41ade946d43f9a5264512 Mon Sep 17 00:00:00 2001 From: Xiaodong Zhang Date: Wed, 28 Feb 2018 14:35:46 +0800 Subject: [PATCH 060/117] Bump up deployment version in tasks/access-application-cluster folder (#7324) --- docs/tasks/access-application-cluster/frontend.yaml | 7 ++++++- docs/tasks/access-application-cluster/hello.yaml | 7 ++++++- 2 files changed, 12 insertions(+), 2 deletions(-) diff --git a/docs/tasks/access-application-cluster/frontend.yaml b/docs/tasks/access-application-cluster/frontend.yaml index 63631c0d05ede..9f5b6b757fe8c 100644 --- a/docs/tasks/access-application-cluster/frontend.yaml +++ b/docs/tasks/access-application-cluster/frontend.yaml @@ -12,11 +12,16 @@ spec: targetPort: 80 type: LoadBalancer --- -apiVersion: apps/v1beta1 +apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: + selector: + matchLabels: + app: hello + tier: frontend + track: stable replicas: 1 template: metadata: diff --git a/docs/tasks/access-application-cluster/hello.yaml b/docs/tasks/access-application-cluster/hello.yaml index 61e4ea4a17612..85dff18ee1d80 100644 --- a/docs/tasks/access-application-cluster/hello.yaml +++ b/docs/tasks/access-application-cluster/hello.yaml @@ -1,8 +1,13 @@ -apiVersion: apps/v1beta1 +apiVersion: apps/v1 kind: Deployment metadata: name: hello spec: + selector: + matchLabels: + app: hello + tier: backend + track: stable replicas: 7 template: metadata: From ac9acff05a292fe45e4970ed7e189c326b4677bf Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ronny=20L=C3=B3pez?= Date: Wed, 28 Feb 2018 15:04:46 +0100 Subject: [PATCH 061/117] Fix broken link to image (#7554) --- docs/setup/independent/install-kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/setup/independent/install-kubeadm.md b/docs/setup/independent/install-kubeadm.md index 8d94bf1dfee91..08920178db72e 100644 --- a/docs/setup/independent/install-kubeadm.md +++ b/docs/setup/independent/install-kubeadm.md @@ -4,7 +4,7 @@ title: Installing kubeadm {% capture overview %} -This page shows how to install the `kubeadm` toolbox. +This page shows how to install the `kubeadm` toolbox. 
For information how to create a cluster with kubeadm once you have performed this installation process, see the [Using kubeadm to Create a Cluster](/docs/setup/independent/create-cluster-kubeadm/) page. From d80e55506adc9e66e584f67e91c64bc3d7880e97 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ronny=20L=C3=B3pez?= Date: Wed, 28 Feb 2018 15:07:46 +0100 Subject: [PATCH 062/117] Fix link to certified kubernetes badge (#7555) --- docs/reference/setup-tools/kubeadm/kubeadm.md | 2 +- docs/setup/independent/create-cluster-kubeadm.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/reference/setup-tools/kubeadm/kubeadm.md b/docs/reference/setup-tools/kubeadm/kubeadm.md index 536301d9069d2..d45b2cc0979ec 100644 --- a/docs/reference/setup-tools/kubeadm/kubeadm.md +++ b/docs/reference/setup-tools/kubeadm/kubeadm.md @@ -5,7 +5,7 @@ approvers: - jbeda title: Overview of kubeadm --- -Kubeadm is a tool built to provide `kubeadm init` and `kubeadm join` as best-practice “fast paths” for creating Kubernetes clusters. +Kubeadm is a tool built to provide `kubeadm init` and `kubeadm join` as best-practice “fast paths” for creating Kubernetes clusters. kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design, it cares only about bootstrapping, not about provisioning machines. Likewise, installing various nice-to-have addons, like the Kubernetes Dashboard, monitoring solutions, and cloud-specific addons, is not in scope. diff --git a/docs/setup/independent/create-cluster-kubeadm.md b/docs/setup/independent/create-cluster-kubeadm.md index 342aeabcdd774..5cecf8df4e2d3 100644 --- a/docs/setup/independent/create-cluster-kubeadm.md +++ b/docs/setup/independent/create-cluster-kubeadm.md @@ -9,7 +9,7 @@ title: Using kubeadm to Create a Cluster {% capture overview %} -**kubeadm** is a toolkit that helps you bootstrap a best-practice Kubernetes +**kubeadm** is a toolkit that helps you bootstrap a best-practice Kubernetes cluster in an easy, reasonably secure and extensible way. It also supports managing [Bootstrap Tokens](/docs/admin/bootstrap-tokens/) for you and upgrading/downgrading clusters. From def491a76c99537d686817bcd5823618b8c0870b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Daniel=20Ko=C5=84czyk?= Date: Wed, 28 Feb 2018 22:02:47 +0000 Subject: [PATCH 063/117] Add missing NodePort in the example output (#7565) --- .../service-access-application-cluster.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/tasks/access-application-cluster/service-access-application-cluster.md b/docs/tasks/access-application-cluster/service-access-application-cluster.md index 212e1be9e6826..fee013ba173d3 100644 --- a/docs/tasks/access-application-cluster/service-access-application-cluster.md +++ b/docs/tasks/access-application-cluster/service-access-application-cluster.md @@ -71,6 +71,8 @@ provides load balancing for an application that has two running instances. Type: NodePort IP: 10.32.0.16 Port: 8080/TCP + TargetPort: 8080/TCP + NodePort: 31496/TCP Endpoints: 10.200.1.4:8080,10.200.2.5:8080 Session Affinity: None Events: From 0c2c9d269a8911d62ef77e9fbbfe5e1211e92a88 Mon Sep 17 00:00:00 2001 From: Matt Janssen Date: Wed, 28 Feb 2018 17:07:56 -0600 Subject: [PATCH 064/117] Add missing closing paren. 
(#7563) Currently reads: > Pods (such as RollingUpdate, is almost but should read: > Pods (such as RollingUpdate), is almost --- docs/concepts/configuration/overview.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/configuration/overview.md b/docs/concepts/configuration/overview.md index 3322e0b356ceb..3381f9e7eef6f 100644 --- a/docs/concepts/configuration/overview.md +++ b/docs/concepts/configuration/overview.md @@ -32,7 +32,7 @@ This is a living document. If you think of something that is not on this list bu - Don't use naked Pods (that is, Pods not bound to a [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) or [Deployment](/docs/concepts/workloads/controllers/deployment/)) if you can avoid it. Naked Pods will not be rescheduled in the event of a node failure. - A Deployment, which both creates a ReplicaSet to ensure that the desired number of Pods is always available, and specifies a strategy to replace Pods (such as [RollingUpdate](/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment), is almost always preferable to creating Pods directly, except for some explicit [`restartPolicy: Never`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) scenarios. A [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) may also be appropriate. + A Deployment, which both creates a ReplicaSet to ensure that the desired number of Pods is always available, and specifies a strategy to replace Pods (such as [RollingUpdate](/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment)), is almost always preferable to creating Pods directly, except for some explicit [`restartPolicy: Never`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) scenarios. A [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) may also be appropriate. ## Services From 9c885f6f44a70462bd3ecea2fb75818a99ef4ae7 Mon Sep 17 00:00:00 2001 From: Josh Horwitz Date: Wed, 28 Feb 2018 23:53:55 -0500 Subject: [PATCH 065/117] Rename network-plugin-dir kubelet flag to cni-bin-dir (#7224) --- docs/concepts/cluster-administration/network-plugins.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/concepts/cluster-administration/network-plugins.md b/docs/concepts/cluster-administration/network-plugins.md index 67c3605ae99f4..852f32740e3ed 100644 --- a/docs/concepts/cluster-administration/network-plugins.md +++ b/docs/concepts/cluster-administration/network-plugins.md @@ -20,8 +20,8 @@ Network plugins in Kubernetes come in a few flavors: The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it found, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as rkt manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins: -* `network-plugin-dir`: Kubelet probes this directory for plugins on startup -* `network-plugin`: The network plugin to use from `network-plugin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni". +* `cni-bin-dir`: Kubelet probes this directory for plugins on startup +* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni". 
## Network Plugin Requirements @@ -47,7 +47,7 @@ Kubenet creates a Linux bridge named `cbr0` and creates a veth pair for each pod The plugin requires a few things: -* The standard CNI `bridge`, `lo` and `host-local` plugins are required, at minimum version 0.2.0. Kubenet will first search for them in `/opt/cni/bin`. Specify `network-plugin-dir` to supply additional search path. The first found match will take effect. +* The standard CNI `bridge`, `lo` and `host-local` plugins are required, at minimum version 0.2.0. Kubenet will first search for them in `/opt/cni/bin`. Specify `cni-bin-dir` to supply additional search path. The first found match will take effect. * Kubelet must be run with the `--network-plugin=kubenet` argument to enable the plugin * Kubelet should also be run with the `--non-masquerade-cidr=` argument to ensure traffic to IPs outside this range will use IP masquerade. * The node must be assigned an IP subnet through either the `--pod-cidr` kubelet command-line option or the `--allocate-node-cidrs=true --cluster-cidr=` controller-manager command-line options. @@ -69,5 +69,5 @@ This option is provided to the network-plugin; currently **only kubenet supports ## Usage Summary * `--network-plugin=cni` specifies that we use the `cni` network plugin with actual CNI plugin binaries located in `--cni-bin-dir` (default `/opt/cni/bin`) and CNI plugin configuration located in `--cni-conf-dir` (default `/etc/cni/net.d`). -* `--network-plugin=kubenet` specifies that we use the `kubenet` network plugin with CNI `bridge` and `host-local` plugins placed in `/opt/cni/bin` or `network-plugin-dir`. +* `--network-plugin=kubenet` specifies that we use the `kubenet` network plugin with CNI `bridge` and `host-local` plugins placed in `/opt/cni/bin` or `cni-bin-dir`. * `--network-plugin-mtu=9001` specifies the MTU to use, currently only used by the `kubenet` network plugin. From d9603b1b154b5a22bafe06905ebe2f0eeeab40e9 Mon Sep 17 00:00:00 2001 From: CaoShuFeng Date: Fri, 2 Mar 2018 09:13:52 +0800 Subject: [PATCH 066/117] fix golang version (#7569) ref: https://github.com/kubernetes/kubernetes/blob/07240b7166d83bed49d783e0ecdfa7ee7e62cfca/hack/lib/golang.sh#L329 --- docs/home/contribute/generated-reference/federation-api.md | 2 +- docs/home/contribute/generated-reference/kubectl.md | 2 +- docs/home/contribute/generated-reference/kubernetes-api.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/home/contribute/generated-reference/federation-api.md b/docs/home/contribute/generated-reference/federation-api.md index 6a22a664e7d1e..0e31ab7ac9dc7 100644 --- a/docs/home/contribute/generated-reference/federation-api.md +++ b/docs/home/contribute/generated-reference/federation-api.md @@ -17,7 +17,7 @@ Kubernetes Federation API. installed. * You need to have -[Golang](https://golang.org/doc/install) version 1.8 or later installed, +[Golang](https://golang.org/doc/install) version 1.9.1 or later installed, and your `$GOPATH` environment variable must be set. * You need to have diff --git a/docs/home/contribute/generated-reference/kubectl.md b/docs/home/contribute/generated-reference/kubectl.md index 225240d77eaea..ec4c0a3b686b3 100644 --- a/docs/home/contribute/generated-reference/kubectl.md +++ b/docs/home/contribute/generated-reference/kubectl.md @@ -30,7 +30,7 @@ reference page, see installed. 
* You need to have -[Golang](https://golang.org/doc/install) version 1.8 or later installed, +[Golang](https://golang.org/doc/install) version 1.9.1 or later installed, and your `$GOPATH` environment variable must be set. * You need to have diff --git a/docs/home/contribute/generated-reference/kubernetes-api.md b/docs/home/contribute/generated-reference/kubernetes-api.md index 8ae201095a5ad..c3d9958285278 100644 --- a/docs/home/contribute/generated-reference/kubernetes-api.md +++ b/docs/home/contribute/generated-reference/kubernetes-api.md @@ -15,7 +15,7 @@ Kubernetes API. You need to have these tools installed: * [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) -* [Golang](https://golang.org/doc/install) version 1.8 or later +* [Golang](https://golang.org/doc/install) version 1.9.1 or later * [Docker](https://docs.docker.com/engine/installation/) * [etcd](https://github.com/coreos/etcd/) From a37d125f18064bffa4238ddf8efb1e6b5d603838 Mon Sep 17 00:00:00 2001 From: Abhinandan Prativadi Date: Thu, 1 Mar 2018 17:14:53 -0800 Subject: [PATCH 067/117] Adding cri-containerd to supported cri runtimes (#7571) --- docs/reference/setup-tools/kubeadm/kubeadm-init.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/docs/reference/setup-tools/kubeadm/kubeadm-init.md index 1e2c66c8c2b56..73cede3e6378f 100755 --- a/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -289,6 +289,7 @@ The container runtime used by default is Docker, which is enabled through the bu Other CRI-based runtimes include: +- [cri-containerd](https://github.com/containerd/cri-containerd) - [cri-o](https://github.com/kubernetes-incubator/cri-o) - [frakti](https://github.com/kubernetes/frakti) - [rkt](https://github.com/kubernetes-incubator/rktlet) From 79d294329833cf14548644f46eb49f827e54aaf7 Mon Sep 17 00:00:00 2001 From: CaoShuFeng Date: Fri, 2 Mar 2018 09:15:52 +0800 Subject: [PATCH 068/117] fix case study description (#7572) --- case-studies/blackrock.html | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/case-studies/blackrock.html b/case-studies/blackrock.html index f4610c842d882..670c990c3a347 100644 --- a/case-studies/blackrock.html +++ b/case-studies/blackrock.html @@ -24,7 +24,7 @@
    CASE STUDY:
    Challenge
    - The world’s largest asset manager, BlackRock operates a very controlled static deployment scheme, which has allowed for scalability over the years. But in their data science division, there was a need for more dynamic access to resources. "We want to be able to give every investor access to data science, meaning Python notebooks, or even something much more advanced, like a MapReduce engine based on Spark," says Michael Francis, a Managing Director in BlackRock’s Product Group, which runs the company’s investment management platform. "Managing complex Python installations on users’ desktops is really hard because everyone ends up with slightly different environments. We have existing environments that do these things, but we needed to make it real, expansive and scaleable. Being able to spin that up on demand, tear it down, make that much more dynamic, became a critical thought process for us. It’s not so much that we had to solve our main core production problem, it’s how do we extend that? How do we evolve?" + The world’s largest asset manager, BlackRock operates a very controlled static deployment scheme, which has allowed for scalability over the years. But in their data science division, there was a need for more dynamic access to resources. "We want to be able to give every investor access to data science, meaning Python notebooks, or even something much more advanced, like a MapReduce engine based on Spark," says Michael Francis, a Managing Director in BlackRock’s Product Group, which runs the company’s investment management platform. "Managing complex Python installations on users’ desktops is really hard because everyone ends up with slightly different environments. We have existing environments that do these things, but we needed to make it real, expansive and scalable. Being able to spin that up on demand, tear it down, make that much more dynamic, became a critical thought process for us. It’s not so much that we had to solve our main core production problem, it’s how do we extend that? How do we evolve?"
@@ -65,7 +65,7 @@
    Impact
    - Still, challenges remain. "If you have a shared cluster, you get this storming herd problem where everyone wants to do the same thing at the same time," says Francis. "You could put limits on it, but you’d have to build an infrastructure to define limits for our processes, and the Python notebooks weren’t really designed for that. We have existing environments that do these things, but we needed to make it real, expansive, and scaleable. Being able to spin that up on demand, tear it down, and make that much more dynamic, became a critical thought process for us."
    + Still, challenges remain. "If you have a shared cluster, you get this storming herd problem where everyone wants to do the same thing at the same time," says Francis. "You could put limits on it, but you’d have to build an infrastructure to define limits for our processes, and the Python notebooks weren’t really designed for that. We have existing environments that do these things, but we needed to make it real, expansive, and scalable. Being able to spin that up on demand, tear it down, and make that much more dynamic, became a critical thought process for us."
    Made up of managers from technology, infrastructure, production operations, development and information security, Francis’s team was able to look at the problem holistically and come up with a solution that made sense for BlackRock. "Our initial straw man was that we were going to build everything using Ansible and run it all using some completely different distributed environment," says Francis. "That would have been absolutely the wrong thing to do. Had we gone off on our own as the dev team and developed this solution, it would have been a very different product. And it would have been very expensive. We would not have gone down the route of running under our existing orchestration system. Because we don’t understand it. These guys [in operations and infrastructure] understand it. Having the multidisciplinary team allowed us to get to the right solutions and that actually meant we didn’t build anywhere near the amount we thought we were going to end up building."
    In search of a solution in which they could manage usage on a user-by-user level, Francis’s team gravitated to Red Hat’s OpenShift Kubernetes offering. The company had already experimented with other cloud-native environments, but the team liked that Kubernetes was open source, and "we felt the winds were blowing in the direction of Kubernetes long term," says Francis. "Typically we make technology choices that we believe are going to be here in 5-10 years’ time, in some form. And right now, in this space, Kubernetes feels like the one that’s going to be there." Adds Uri Morris, Vice President of Production Operations: "When you see that the non-Google committers to Kubernetes overtook the Google committers, that’s an indicator of the momentum."
    Once that decision was made, the major challenge was figuring out how to make Kubernetes work within BlackRock’s existing framework. "It’s about understanding how we can operate, manage and support a platform like this, in addition to tacking it onto our existing technology platform," says Project Manager Michael Maskallis. "All the controls we have in place, the change management process, the software development lifecycle, onboarding processes we go through—how can we do all these things?"
    From 38c496d15b1cff2a7f3ba3d250814529386ad9e3 Mon Sep 17 00:00:00 2001 From: Alex Glikson Date: Thu, 1 Mar 2018 20:19:53 -0500 Subject: [PATCH 069/117] Moved more recent info to the top, fixed a typo (#7579) - Moved details of recent releases (1.8 and onward) to the top - added missing "with" in "compatible with the Kubernetes Container Runtime Interface (CRI)" --- docs/tasks/manage-gpus/scheduling-gpus.md | 127 +++++++++++----------- 1 file changed, 61 insertions(+), 66 deletions(-) diff --git a/docs/tasks/manage-gpus/scheduling-gpus.md b/docs/tasks/manage-gpus/scheduling-gpus.md index e00695924ea01..6e12c298699e3 100644 --- a/docs/tasks/manage-gpus/scheduling-gpus.md +++ b/docs/tasks/manage-gpus/scheduling-gpus.md @@ -9,66 +9,6 @@ across nodes. The support for NVIDIA GPUs was added in v1.6 and has gone through multiple backwards incompatible iterations. This page describes how users can consume GPUs across different Kubernetes versions and the current limitations. -## v1.6 and v1.7 -To enable GPU support in 1.6 and 1.7, a special **alpha** feature gate -`Accelerators` has to be set to true across the system: -`--feature-gates="Accelerators=true"`. It also requires using the Docker -Engine as the container runtime. - -Further, the Kubernetes nodes have to be pre-installed with NVIDIA drivers. -Kubelet will not detect NVIDIA GPUs otherwise. - -When you start Kubernetes components after all the above conditions are true, -Kubernetes will expose `alpha.kubernetes.io/nvidia-gpu` as a schedulable -resource. - -You can consume these GPUs from your containers by requesting -`alpha.kubernetes.io/nvidia-gpu` just like you request `cpu` or `memory`. -However, there are some limitations in how you specify the resource requirements -when using GPUs: -- GPUs are only supposed to be specified in the `limits` section, which means: - * You can specify GPU `limits` without specifying `requests` because - Kubernetes will use the limit as the request value by default. - * You can specify GPU in both `limits` and `requests` but these two values - must be equal. - * You cannot specify GPU `requests` without specifying `limits`. -- Containers (and pods) do not share GPUs. There's no overcommitting of GPUs. -- Each container can request one or more GPUs. It is not possible to request a - fraction of a GPU. - -When using `alpha.kubernetes.io/nvidia-gpu` as the resource, you also have to -mount host directories containing NVIDIA libraries (libcuda.so, libnvidia.so -etc.) to the container. - -Here's an example: - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: cuda-vector-add -spec: - restartPolicy: OnFailure - containers: - - name: cuda-vector-add - # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile - image: "k8s.gcr.io/cuda-vector-add:v0.1" - resources: - limits: - alpha.kubernetes.io/nvidia-gpu: 1 # requesting 1 GPU - volumeMounts: - - name: "nvidia-libraries" - mountPath: "/usr/local/nvidia/lib64" - volumes: - - name: "nvidia-libraries" - hostPath: - path: "/usr/lib/nvidia-375" -``` - -The `Accelerators` feature gate and `alpha.kubernetes.io/nvidia-gpu` resource -works on 1.8 and 1.9 as well. It will be deprecated in 1.10 and removed in -1.11. - ## v1.8 onwards **From 1.8 onwards, the recommended way to consume GPUs is to use [device @@ -98,11 +38,6 @@ when using GPUs: - Each container can request one or more GPUs. It is not possible to request a fraction of a GPU. 
-Unlike with `alpha.kubernetes.io/nvidia-gpu`, when using `nvidia.com/gpu` as -the resource, you don't have to mount any special directories in your pod -specs. The device plugin is expected to inject them automatically in the -container. - Here's an example: ```yaml @@ -152,7 +87,7 @@ Report issues with this device plugin to [NVIDIA/k8s-device-plugin](https://gith The [NVIDIA GPU device plugin used by GKE/GCE](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu) doesn't require using nvidia-docker and should work with any container runtime -that is compatible the Kubernetes Container Runtime Interface (CRI). It's tested +that is compatible with the Kubernetes Container Runtime Interface (CRI). It's tested on [Container-Optimized OS](https://cloud.google.com/container-optimized-os/) and has experimental code for Ubuntu from 1.9 onwards. @@ -208,6 +143,66 @@ spec: This will ensure that the pod will be scheduled to a node that has the GPU type you specified. +## v1.6 and v1.7 +To enable GPU support in 1.6 and 1.7, a special **alpha** feature gate +`Accelerators` has to be set to true across the system: +`--feature-gates="Accelerators=true"`. It also requires using the Docker +Engine as the container runtime. + +Further, the Kubernetes nodes have to be pre-installed with NVIDIA drivers. +Kubelet will not detect NVIDIA GPUs otherwise. + +When you start Kubernetes components after all the above conditions are true, +Kubernetes will expose `alpha.kubernetes.io/nvidia-gpu` as a schedulable +resource. + +You can consume these GPUs from your containers by requesting +`alpha.kubernetes.io/nvidia-gpu` just like you request `cpu` or `memory`. +However, there are some limitations in how you specify the resource requirements +when using GPUs: +- GPUs are only supposed to be specified in the `limits` section, which means: + * You can specify GPU `limits` without specifying `requests` because + Kubernetes will use the limit as the request value by default. + * You can specify GPU in both `limits` and `requests` but these two values + must be equal. + * You cannot specify GPU `requests` without specifying `limits`. +- Containers (and pods) do not share GPUs. There's no overcommitting of GPUs. +- Each container can request one or more GPUs. It is not possible to request a + fraction of a GPU. + +When using `alpha.kubernetes.io/nvidia-gpu` as the resource, you also have to +mount host directories containing NVIDIA libraries (libcuda.so, libnvidia.so +etc.) to the container. + +Here's an example: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: cuda-vector-add +spec: + restartPolicy: OnFailure + containers: + - name: cuda-vector-add + # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile + image: "k8s.gcr.io/cuda-vector-add:v0.1" + resources: + limits: + alpha.kubernetes.io/nvidia-gpu: 1 # requesting 1 GPU + volumeMounts: + - name: "nvidia-libraries" + mountPath: "/usr/local/nvidia/lib64" + volumes: + - name: "nvidia-libraries" + hostPath: + path: "/usr/lib/nvidia-375" +``` + +The `Accelerators` feature gate and `alpha.kubernetes.io/nvidia-gpu` resource +works on 1.8 and 1.9 as well. It will be deprecated in 1.10 and removed in +1.11. + ## Future - Support for hardware accelerators in Kubernetes is still in alpha. - Better APIs will be introduced to provision and consume accelerators in a scalable manner. 
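
The two mechanisms this patch documents, requesting the `nvidia.com/gpu` extended resource and steering Pods onto nodes labelled with a particular accelerator type, combine naturally in a single Pod spec. The sketch below is illustrative only: it assumes the NVIDIA device plugin is already deployed, and the label value `nvidia-tesla-k80` and the reuse of the `cuda-vector-add` image are assumptions made for this example rather than anything the patch prescribes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add-k80
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vector-add
    # CUDA test image already used elsewhere in this patch.
    image: "k8s.gcr.io/cuda-vector-add:v0.1"
    resources:
      limits:
        nvidia.com/gpu: 1   # whole GPUs only; the request defaults to the limit
  nodeSelector:
    accelerator: nvidia-tesla-k80   # assumes nodes were labelled beforehand, e.g. kubectl label nodes <node> accelerator=nvidia-tesla-k80
```

A label-based selector like this lets clusters with mixed GPU hardware keep the generic `nvidia.com/gpu` resource name while still landing workloads on the intended card type.
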
From af4e718b4c788dc743e2e77e314a656b58efdb07 Mon Sep 17 00:00:00 2001 From: "Peter (XiangPeng) Zhao" Date: Fri, 2 Mar 2018 12:30:51 +0800 Subject: [PATCH 070/117] Remove no longer existing log message in kubeadm. (#7585) --- docs/setup/independent/create-cluster-kubeadm.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/docs/setup/independent/create-cluster-kubeadm.md b/docs/setup/independent/create-cluster-kubeadm.md index 5cecf8df4e2d3..b8ddcadcff249 100644 --- a/docs/setup/independent/create-cluster-kubeadm.md +++ b/docs/setup/independent/create-cluster-kubeadm.md @@ -144,7 +144,6 @@ see [Tear Down](#tear-down). The output should look like: ``` -[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters. [init] Using Kubernetes version: v1.8.0 [init] Using Authorization modes: [Node RBAC] [preflight] Running pre-flight checks @@ -394,7 +393,6 @@ kubeadm join --token : --discovery-token-ca-cert The output should look something like: ``` -[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters. [preflight] Running pre-flight checks [discovery] Trying to connect to API Server "10.138.0.4:6443" [discovery] Created cluster-info discovery client, requesting info from "https://10.138.0.4:6443" From a84db20babfa7848e71d6e61b7d8448dc20dbbb6 Mon Sep 17 00:00:00 2001 From: Philip Mallory Date: Fri, 2 Mar 2018 16:13:53 -0800 Subject: [PATCH 071/117] Update MacOS instructions to use curl (#7613) MacOS comes with curl which makes it more convenient than wget. This change is in response to #7275 --- docs/getting-started-guides/kops.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/getting-started-guides/kops.md b/docs/getting-started-guides/kops.md index 040ea6aaba734..f3e489df41b6c 100644 --- a/docs/getting-started-guides/kops.md +++ b/docs/getting-started-guides/kops.md @@ -34,7 +34,7 @@ Download kops from the [releases page](https://github.com/kubernetes/kops/releas On MacOS: ``` -wget https://github.com/kubernetes/kops/releases/download/1.8.0/kops-darwin-amd64 +curl -OL https://github.com/kubernetes/kops/releases/download/1.8.0/kops-darwin-amd64 chmod +x kops-darwin-amd64 mv kops-darwin-amd64 /usr/local/bin/kops # you can also install using Homebrew From 0e6f7460ff2b165d05688dd318ac74a9e1c373f5 Mon Sep 17 00:00:00 2001 From: Garry Shutler Date: Sat, 3 Mar 2018 18:46:54 +0000 Subject: [PATCH 072/117] Fixed typo (#7603) * Fixed typo * Fixed invalid concurrencyPolicy value --- docs/concepts/workloads/controllers/cron-jobs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/workloads/controllers/cron-jobs.md b/docs/concepts/workloads/controllers/cron-jobs.md index 26620c55c7ba2..58f1664c65680 100644 --- a/docs/concepts/workloads/controllers/cron-jobs.md +++ b/docs/concepts/workloads/controllers/cron-jobs.md @@ -124,7 +124,7 @@ are certain circumstances where two jobs might be created, or no job might be cr but do not completely prevent them. Therefore, jobs should be _idempotent_. If `startingDeadlineSeconds` is set to a large value or left unset (the default) -and if `concurrentPolicy` is set to `AllowConcurrent`, the jobs will always run +and if `concurrencyPolicy` is set to `Allow`, the jobs will always run at least once. 
Jobs may fail to run if the CronJob controller is not running or broken for a From 92d0dec8375eeca2f7e5095ca52595ec17417dcf Mon Sep 17 00:00:00 2001 From: Michelle Au Date: Sat, 3 Mar 2018 11:12:53 -0800 Subject: [PATCH 073/117] Fix indentation for local StorageClass (#7609) --- docs/concepts/storage/storage-classes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/storage/storage-classes.md b/docs/concepts/storage/storage-classes.md index 6353fdb0beddb..d323c3ac1724c 100644 --- a/docs/concepts/storage/storage-classes.md +++ b/docs/concepts/storage/storage-classes.md @@ -638,7 +638,7 @@ and referenced with the `adminSecretNamespace` parameter. Secrets used by pre-provisioned volumes must be created in the same namespace as the PVC that references it. -#### Local +### Local {% assign for_k8s_version="v1.9" %}{% include feature-state-alpha.md %} From 75aea2f0eccaeea3c4e32b5a62d24025cc35c7c3 Mon Sep 17 00:00:00 2001 From: Aivars Sterns Date: Sat, 3 Mar 2018 21:33:53 +0200 Subject: [PATCH 074/117] Update kubespray docs (#7601) * Removed mention of kubespray-cli as mention has been removed from kubespray * changed descriptions and requirements --- docs/getting-started-guides/kubespray.md | 21 ++++++++------------- 1 file changed, 8 insertions(+), 13 deletions(-) diff --git a/docs/getting-started-guides/kubespray.md b/docs/getting-started-guides/kubespray.md index f5c137cc7e02c..9ee3443038866 100644 --- a/docs/getting-started-guides/kubespray.md +++ b/docs/getting-started-guides/kubespray.md @@ -10,7 +10,7 @@ Kubespray is a composition of [Ansible](http://docs.ansible.com/) playbooks, [in * a highly available cluster * composable attributes -* support for most popular Linux distributions +* support for most popular Linux distributions (CoreOS, Debian Jessie, Ubuntu 16.04, CentOS/RHEL 7) * continuous integration tests To choose a tool which best fits your use case, read [this comparison](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/comparisons.md) to [kubeadm](/docs/admin/kubeadm/) and [kops](../kops). @@ -21,7 +21,7 @@ To choose a tool which best fits your use case, read [this comparison](https://g Provision servers with the following requirements: -* `Ansible v2.3` (or newer) +* `Ansible v2.4` (or newer) * `Jinja 2.9` (or newer) * `python-netaddr` installed on the machine that running Ansible commands * Target servers must have access to the Internet in order to pull docker images @@ -37,10 +37,6 @@ Kubespray provides the following utilities to help provision your environment: * [Terraform](https://www.terraform.io/) scripts for the following cloud providers: * [AWS](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/aws) * [OpenStack](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/openstack) -* [kubespray-cli](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md) - -**Note:** kubespray-cli is no longer actively maintained. -{: .note} ### (2/5) Compose an inventory file @@ -62,15 +58,14 @@ Kubespray customizations can be made to a [variable file](http://docs.ansible.co ### (4/5) Deploy a Cluster -Next, deploy your cluster with one of two methods: - -* [ansible-playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment). 
-* [kubespray-cli tool](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md) +Next, deploy your cluster: -**Note:** kubespray-cli is no longer actively maintained. -{: .note} +Cluster deployment using [ansible-playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment). +```console +ansible-playbook -i your/inventory/hosts.ini cluster.yml -b -v \ + --private-key=~/.ssh/private_key +``` -Both methods run the default [cluster definition file](https://github.com/kubernetes-incubator/kubespray/blob/master/cluster.yml). Large deployments (100+ nodes) may require [specific adjustments](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/large-deployments.md) for best results. From 46d896e518062be0bd102a1ee3522857c8199a83 Mon Sep 17 00:00:00 2001 From: Alex Glikson Date: Sat, 3 Mar 2018 14:41:53 -0500 Subject: [PATCH 075/117] Use standard stress utility in examples (#7582) In memory allocation examples, replaced the use of custom stress utility with the standard one. --- .../configure-pod-container/assign-memory-resource.md | 2 +- .../memory-request-limit-2.yaml | 11 +++-------- .../memory-request-limit-3.yaml | 11 +++-------- .../configure-pod-container/memory-request-limit.yaml | 11 +++-------- 4 files changed, 10 insertions(+), 25 deletions(-) diff --git a/docs/tasks/configure-pod-container/assign-memory-resource.md b/docs/tasks/configure-pod-container/assign-memory-resource.md index 68ef9c257df8b..891e1acc5e9f2 100644 --- a/docs/tasks/configure-pod-container/assign-memory-resource.md +++ b/docs/tasks/configure-pod-container/assign-memory-resource.md @@ -67,7 +67,7 @@ for the Pod: {% include code.html language="yaml" file="memory-request-limit.yaml" ghlink="/docs/tasks/configure-pod-container/memory-request-limit.yaml" %} In the configuration file, the `args` section provides arguments for the Container when it starts. -The `-mem-total 150Mi` argument tells the Container to attempt to allocate 150 MiB of memory. +The `"--vm-bytes", "150M"` arguments tell the Container to attempt to allocate 150 MiB of memory. 
Create the Pod: diff --git a/docs/tasks/configure-pod-container/memory-request-limit-2.yaml b/docs/tasks/configure-pod-container/memory-request-limit-2.yaml index 38376da8835f4..99032c4fc2adc 100644 --- a/docs/tasks/configure-pod-container/memory-request-limit-2.yaml +++ b/docs/tasks/configure-pod-container/memory-request-limit-2.yaml @@ -6,16 +6,11 @@ metadata: spec: containers: - name: memory-demo-2-ctr - image: vish/stress + image: polinux/stress resources: requests: memory: "50Mi" limits: memory: "100Mi" - args: - - -mem-total - - 250Mi - - -mem-alloc-size - - 10Mi - - -mem-alloc-sleep - - 1s + command: ["stress"] + args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"] diff --git a/docs/tasks/configure-pod-container/memory-request-limit-3.yaml b/docs/tasks/configure-pod-container/memory-request-limit-3.yaml index 768e83701d1cf..9f089c4a7a2be 100644 --- a/docs/tasks/configure-pod-container/memory-request-limit-3.yaml +++ b/docs/tasks/configure-pod-container/memory-request-limit-3.yaml @@ -6,16 +6,11 @@ metadata: spec: containers: - name: memory-demo-3-ctr - image: vish/stress + image: polinux/stress resources: limits: memory: "1000Gi" requests: memory: "1000Gi" - args: - - -mem-total - - 150Mi - - -mem-alloc-size - - 10Mi - - -mem-alloc-sleep - - 1s + command: ["stress"] + args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"] diff --git a/docs/tasks/configure-pod-container/memory-request-limit.yaml b/docs/tasks/configure-pod-container/memory-request-limit.yaml index 159c825904e91..985b1308d9a00 100644 --- a/docs/tasks/configure-pod-container/memory-request-limit.yaml +++ b/docs/tasks/configure-pod-container/memory-request-limit.yaml @@ -6,16 +6,11 @@ metadata: spec: containers: - name: memory-demo-ctr - image: vish/stress + image: polinux/stress resources: limits: memory: "200Mi" requests: memory: "100Mi" - args: - - -mem-total - - 150Mi - - -mem-alloc-size - - 10Mi - - -mem-alloc-sleep - - 1s + command: ["stress"] + args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"] From 550c26b9293676f9181ba49b54bd7ba762dd91d2 Mon Sep 17 00:00:00 2001 From: the0ffh Date: Sat, 3 Mar 2018 20:42:53 +0100 Subject: [PATCH 076/117] Update networking.md (#7578) Clarified 'pod container' description. --- docs/concepts/cluster-administration/networking.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/cluster-administration/networking.md b/docs/concepts/cluster-administration/networking.md index 7f39a3cbfe229..7af520617e379 100644 --- a/docs/concepts/cluster-administration/networking.md +++ b/docs/concepts/cluster-administration/networking.md @@ -85,7 +85,7 @@ network namespaces - including their IP address. This means that containers within a `Pod` can all reach each other's ports on `localhost`. This does imply that containers within a `Pod` must coordinate port usage, but this is no different than processes in a VM. This is called the "IP-per-pod" model. This -is implemented in Docker as a "pod container" which holds the network namespace +is implemented, using Docker, as a "pod container" which holds the network namespace open while "app containers" (the things the user specified) join that namespace with Docker's `--net=container:` function. 
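
For readers who have not seen the mechanism this hunk refers to, the "pod container" pattern can be reproduced by hand with plain Docker commands. The sketch below only illustrates the `--net=container:` behaviour described above; the container names and the pause image tag are assumptions for the example, not something the patch specifies.

```shell
# Start a placeholder container whose only job is to own the pod's network namespace.
docker run -d --name pod-infra k8s.gcr.io/pause:3.0

# Join the "app containers" to that namespace; they now share a single IP address
# and can reach each other's ports on localhost, matching the IP-per-pod model.
docker run -d --name app-a --net=container:pod-infra nginx
docker run -d --name app-b --net=container:pod-infra redis
```

Because both app containers share one network namespace, they also have to coordinate port usage, which is exactly the caveat the surrounding text calls out.
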
From c97a625547fe91102dc395f45fc4433eb96df4f1 Mon Sep 17 00:00:00 2001 From: Cheng Xing Date: Sat, 3 Mar 2018 11:46:54 -0800 Subject: [PATCH 077/117] Moving CSI to Out of Tree section; linking to out of tree plugin FAQ (#7564) --- docs/concepts/storage/volumes.md | 72 ++++++++++++++++---------------- 1 file changed, 37 insertions(+), 35 deletions(-) diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md index 3881dc8e54fec..d298cfbd9a539 100644 --- a/docs/concepts/storage/volumes.md +++ b/docs/concepts/storage/volumes.md @@ -215,38 +215,6 @@ keyed with `log_level`. receive ConfigMap updates. {: .note} -### csi - -CSI stands for [Container Storage Interface](https://github.com/container-storage-interface/spec/blob/master/spec.md), -a specification attempting to establish an industry standard interface that -Container Orchestration Systems (COs) can use to expose arbitrary storage systems -to their container workloads. -For more information about the details, please check the -[design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md). - - -The `csi` volume type is an in-tree CSI volume plugin for Pods to interact -with external CSI volume drivers running on the same node. -After having deployed a CSI compatible volume driver, users can use `csi` as the -volume type to mount the storage provided by the driver. - -CSI persistent volume support is introduced in Kubernetes v1.9 as an alpha feature -which has to be explicitly enabled by the cluster administrator. In other words, -the cluster administrator needs to add "`CSIPersistentVolume=true`" to the -"`--feature-gates=`" flag for the apiserver, the controller-manager and the kubelet -components. - -A CSI persistent volume has the following fields for users to specify: - -- `driver`: A string value that specifies the name of the volume driver to use. - It has to be less than 63 characters and starts with a character. The driver - name can have '`.`', '`-`', '`_`' or digits in it. -- `volumeHandle`: A string value that uniquely identify the volume name returned - from the CSI volume plugin's `CreateVolume` call. The volume handle is then - used in all subsequent calls to the volume driver for referencing the volume. -- `readOnly`: An optional boolean value indicating whether the volume is to be - published as read only. Default is false. - ### downwardAPI A `downwardAPI` volume is used to make downward API data available to applications. @@ -990,16 +958,50 @@ several media types. ## Out-of-Tree Volume Plugins In addition to the previously listed volume types, storage vendors may create custom plugins without adding it to the Kubernetes repository. This can be -achieved by using the `FlexVolume` plugin. +achieved by using either the `CSI` plugin or the `FlexVolume` plugin. + +For storage vendors looking to create an out-of-tree volume plugin, [please refer to this FAQ](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md) for choosing between the plugin options. +### CSI + +CSI stands for [Container Storage Interface](https://github.com/container-storage-interface/spec/blob/master/spec.md), +a specification attempting to establish an industry standard interface that +container orchestration systems can use to expose arbitrary storage systems +to their container workloads. 
+Please read +[CSI design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md) for further information. + + +The `csi` volume type is an in-tree CSI volume plugin for Pods to interact +with external CSI volume drivers running on the same node. +After having deployed a CSI compatible volume driver, users can use `csi` as the +volume type to mount the storage provided by the driver. + +CSI persistent volume support is an alpha feature in Kubernetes v1.9 and requires a +cluster administrator to enable it. To enable CSI persistent volume support, the +cluster administrator adds `CSIPersistentVolume=true` to the `--feature-gates` flag +for apiserver, controller-manager, and kubelet. + +The following fields are available to storage administrators to configure a CSI +persistent volume: + +- `driver`: A string value that specifies the name of the volume driver to use. + It has to be less than 63 characters and starts with a character. The driver + name can have '`.`', '`-`', '`_`' or digits in it. +- `volumeHandle`: A string value that uniquely identify the volume name returned + from the CSI volume plugin's `CreateVolume` call. The volume handle is then + used in all subsequent calls to the volume driver for referencing the volume. +- `readOnly`: An optional boolean value indicating whether the volume is to be + published as read only. Default is false. + +### FlexVolume `FlexVolume` enables users to mount vendor volumes into a pod. The vendor plugin is implemented using a driver, an executable supporting a list of volume commands defined by the `FlexVolume` API. Drivers must be installed in a pre-defined -volume plugin path on each node. +volume plugin path on each node. Pods interact with FlexVolume drivers through the `flexVolume` in-tree plugin. More details can be found [here](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md). - ## Mount propagation **Note:** Mount propagation is an alpha feature in Kubernetes 1.8 and may be From 5995f24426bc1cb9b9c6c89dd360dc6c6fab2812 Mon Sep 17 00:00:00 2001 From: yulng Date: Sun, 4 Mar 2018 03:52:53 +0800 Subject: [PATCH 078/117] Please note N/A's meaning (#7553) * Please explain N/A's meaning Because N/A's meaning probably is Not Available/ Not applicable or Name and Address * all "N/A" change to "Not Available" all "N/A" change to "Not Available" * "Available" change to "Applicable" all "Available" change to "Applicable" --- _plugins/README.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/_plugins/README.md b/_plugins/README.md index ba8d00ca3e67d..0294efaf5b751 100644 --- a/_plugins/README.md +++ b/_plugins/README.md @@ -25,7 +25,7 @@ This renders the definition of the glossary term inside a `
    `, preserving Ma | Name | Default | Description | | --- | --- | --- | -| `term_id` | N/A (Required) | The `id` of the glossary term whose definition will be used. (This `id` is the same as the filename of the term, i.e. `_data/glossary/.yml`.) | +| `term_id` | Not Applicable (Required) | The `id` of the glossary term whose definition will be used. (This `id` is the same as the filename of the term, i.e. `_data/glossary/.yml`.) | | `length` | "short" | Specifies which term definition should be used ("short" for the `short-definition`, "long" for `long-description`, "all" when both should be included). | | `prepend` | "Service Catalog is" | A prefix which can be attached in front of a term's short definition (which is one or more sentence fragments). | @@ -49,7 +49,7 @@ This renders the following: | Name | Default | Description | | --- | --- | --- | | `text` | the `name` of the glossary term | The text that the user will hover over to display the glossary definition. **You should include this if using the tooltip inside of a glossary term's YAML short-definition.** | -| `term_id` | N/A (Required) | The `id` of the associated glossary term. (This `id` is the same as the filename of the term, i.e. `_data/glossary/.yml`.) | +| `term_id` | Not Applicable (Required) | The `id` of the associated glossary term. (This `id` is the same as the filename of the term, i.e. `_data/glossary/.yml`.) | #### (3) `glossary_injector` tag @@ -73,6 +73,6 @@ This renders the following: | Name | Default | Description | | --- | --- | --- | | `text` | the `name` of the glossary term | The text that the user will hover over to display the glossary definition. | -| `term_id` | N/A (Required) | The `id` of the glossary term whose definition will be used. (This `id` is the same as the filename of the term, i.e. `_data/glossary/.yml`.) | -| `placeholder_id` | N/A (Required) | The `id` of the HTML element whose contents will be populated with the definition of `term_id` | +| `term_id` | Not Applicable (Required) | The `id` of the glossary term whose definition will be used. (This `id` is the same as the filename of the term, i.e. `_data/glossary/.yml`.) | +| `placeholder_id` | Not Applicable (Required) | The `id` of the HTML element whose contents will be populated with the definition of `term_id` | | `length` | "short" | Specifies which term definition should be used ("short" for the `short-definition`, "long" for `long-description`, "all" when both should be included). | From 2a4980804914946d6c4063b1433077eb47d3c6f6 Mon Sep 17 00:00:00 2001 From: Logan Rakai Date: Sat, 3 Mar 2018 12:53:52 -0700 Subject: [PATCH 079/117] Increase consistency, and style guide compliance (#7503) - not using object after API objects per https://kubernetes.io/docs/home/contribute/style-guide/#use-camel-case-for-api-objects --- docs/concepts/policy/resource-quotas.md | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/docs/concepts/policy/resource-quotas.md b/docs/concepts/policy/resource-quotas.md index 9253d3d602ba1..e44091700c7c6 100644 --- a/docs/concepts/policy/resource-quotas.md +++ b/docs/concepts/policy/resource-quotas.md @@ -18,15 +18,15 @@ Resource quotas work like this: - Different teams work in different namespaces. Currently this is voluntary, but support for making this mandatory via ACLs is planned. -- The administrator creates one or more Resource Quota objects for each namespace. +- The administrator creates one or more `ResourceQuotas` for each namespace. 
- Users create resources (pods, services, etc.) in the namespace, and the quota system - tracks usage to ensure it does not exceed hard resource limits defined in a Resource Quota. + tracks usage to ensure it does not exceed hard resource limits defined in a `ResourceQuota`. - If creating or updating a resource violates a quota constraint, the request will fail with HTTP status code `403 FORBIDDEN` with a message explaining the constraint that would have been violated. - If quota is enabled in a namespace for compute resources like `cpu` and `memory`, users must specify requests or limits for those values; otherwise, the quota system may reject pod creation. Hint: Use the LimitRange admission controller to force defaults for pods that make no compute resource requirements. - See the [walkthrough](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) for an example to avoid this problem. + See the [walkthrough](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) for an example of how to avoid this problem. Examples of policies that could be created using namespaces and quotas are: @@ -42,12 +42,12 @@ Neither contention nor changes to quota will affect already created resources. ## Enabling Resource Quota -Resource Quota support is enabled by default for many Kubernetes distributions. It is +Resource quota support is enabled by default for many Kubernetes distributions. It is enabled when the apiserver `--admission-control=` flag has `ResourceQuota` as one of its arguments. -Resource Quota is enforced in a particular namespace when there is a -`ResourceQuota` object in that namespace. +A resource quota is enforced in a particular namespace when there is a +`ResourceQuota` in that namespace. ## Compute Resource Quota @@ -83,7 +83,7 @@ define a quota as follows: * `gold.storageclass.storage.k8s.io/requests.storage: 500Gi` * `bronze.storageclass.storage.k8s.io/requests.storage: 100Gi` -In release 1.8, quota support for local ephemeral storage is added as alpha feature +In release 1.8, quota support for local ephemeral storage is added as an alpha feature: | Resource Name | Description | | ------------------------------- |----------------------------------------------------------- | @@ -134,7 +134,7 @@ The following types are supported: | `secrets` | The total number of secrets that can exist in the namespace. | For example, `pods` quota counts and enforces a maximum on the number of `pods` -created in a single namespace that are not terminal. You might want to set a `pods` +created in a single namespace that are not terminal. You might want to set a `pods` quota on a namespace to avoid the case where a user creates many small pods and exhausts the cluster's supply of Pod IPs. @@ -264,7 +264,7 @@ count/secrets 1 4 ## Quota and Cluster Capacity -Resource Quota objects are independent of the Cluster Capacity. They are +`ResourceQuotas` are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does *not* automatically give each namespace the ability to consume more resources. @@ -275,8 +275,8 @@ Sometimes more complex policies may be desired, such as: limit to prevent accidental resource exhaustion. - Detect demand from one namespace, add nodes, and increase quota. 
-Such policies could be implemented using ResourceQuota as a building-block, by -writing a 'controller' which watches the quota usage and adjusts the quota +Such policies could be implemented using `ResourceQuotas` as building blocks, by +writing a "controller" that watches the quota usage and adjusts the quota hard limits of each namespace according to other signals. Note that resource quota divides up aggregate cluster resources, but it creates no From bf661f9f444626961d73ee37a1bad575d0e15e31 Mon Sep 17 00:00:00 2001 From: Loic Nageleisen Date: Sat, 3 Mar 2018 21:13:54 +0100 Subject: [PATCH 080/117] Add a section about routing errors (#7078) A common source of beffudlement when using local hypervisors or cloud providers with peculiar interface setups, IP adressing, or network policies for which kubelet cannot guess the right IP to use. `awk` is being used so that the example works in CoreOS Container Linux too. Requested in kubernetes/kubeadm#203. --- .../independent/troubleshooting-kubeadm.md | 34 +++++++++++++++++++ 1 file changed, 34 insertions(+) diff --git a/docs/setup/independent/troubleshooting-kubeadm.md b/docs/setup/independent/troubleshooting-kubeadm.md index 8c88a1bc66173..590dc3e1232f4 100644 --- a/docs/setup/independent/troubleshooting-kubeadm.md +++ b/docs/setup/independent/troubleshooting-kubeadm.md @@ -181,3 +181,37 @@ If you're using flannel as the pod network inside vagrant, then you will have to Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed. This may lead to problems with flannel. By default, flannel selects the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this issue, pass the `--iface eth1` flag to flannel so that the second interface is chosen. + +### Routing errors + +In some situations `kubectl logs` and `kubectl run` commands may return with the following errors despite an otherwise apparently correctly working cluster: + +``` +Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host +``` + +This is due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider. As an example, Digital Ocean assigns a public IP to `eth0` as well as a private one to be used internally as anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's `InternalIP` instead of the public one. + +Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will not display the offending alias IP address. Alternatively an API endpoint specific to Digital Ocean allows to query for the anchor IP from the droplet: + +``` +curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address +``` + +The workaround is to tell `kubelet` which IP to use using `--node-ip`. When using Digital Ocean, it can be the public one (assigned to `eth0`) or the private one (assigned to `eth1`) should you want to use the optional private network. 
For example: + +``` +IFACE=eth0 # change to eth1 for DO's private network +DROPLET_IP_ADDRESS=$(ip addr show dev $IFACE | awk 'match($0,/inet (([0-9]|\.)+).* scope global/,a) { print a[1]; exit }') +echo $DROPLET_IP_ADDRESS # check this, just in case +echo "Environment=\"KUBELET_EXTRA_ARGS=--node-ip=$DROPLET_IP_ADDRESS\"" >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +``` + +Please note that this assumes `KUBELET_EXTRA_ARGS` hasn't already been set in the unit file. + +Then restart `kubelet`: + +``` +systemctl daemon-reload +systemctl restart kubelet +``` From 00c3eb0e882392d745a92f1f60cbe43f2672176b Mon Sep 17 00:00:00 2001 From: Ahmet Alp Balkan Date: Sat, 3 Mar 2018 12:44:55 -0800 Subject: [PATCH 081/117] Remove "design docs" from /docs/reference sidebar (#6410) This list on the sidebar isn't very useful: - it's not an exhaustive list - it's not an up-to-date list either - design docs are not documentation (they're already stale) - we already link to the full list from https://kubernetes.io/docs/reference/ home page Also removing the 'docs/admin/ovs-networking.md' document as per the pull request comments, it's no longer necessary. Signed-off-by: Ahmet Alp Balkan --- _data/reference.yml | 16 --------------- docs/admin/ovs-networking.md | 21 ------------------- docs/reference/design-docs/overview.md | 28 -------------------------- 3 files changed, 65 deletions(-) delete mode 100644 docs/admin/ovs-networking.md delete mode 100644 docs/reference/design-docs/overview.md diff --git a/_data/reference.yml b/_data/reference.yml index 65ae449f1b2fb..879164011354a 100644 --- a/_data/reference.yml +++ b/_data/reference.yml @@ -103,22 +103,6 @@ toc: - docs/reference/generated/federation-apiserver.md - docs/reference/generated/federation-controller-manager.md -- title: Kubernetes Design Docs - landing_page: /docs/reference/design-docs/overview/ - section: - - docs/reference/design-docs/overview.md - - title: Kubernetes Architecture - path: https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md - - title: Kubernetes Design Overview - path: https://github.com/kubernetes/kubernetes/tree/release-1.6/docs/design - - title: Kubernetes Identity and Access Management - path: https://git.k8s.io/community/contributors/design-proposals/auth/access.md - - docs/admin/ovs-networking.md - - title: Security Contexts - path: https://git.k8s.io/community/contributors/design-proposals/auth/security_context.md - - title: Security in Kubernetes - path: https://git.k8s.io/community/contributors/design-proposals/auth/security.md - - title: Kubernetes Issues and Security landing_page: https://github.com/kubernetes/kubernetes/issues/ section: diff --git a/docs/admin/ovs-networking.md b/docs/admin/ovs-networking.md deleted file mode 100644 index 86085e6d4060a..0000000000000 --- a/docs/admin/ovs-networking.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -reviewers: -- thockin -title: Kubernetes OpenVSwitch GRE/VxLAN networking ---- - -This document describes how OpenVSwitch is used to setup networking between pods across nodes. -The tunnel type could be GRE or VxLAN. VxLAN is preferable when large scale isolation needs to be performed within the network. - -![OVS Networking](/images/docs/ovs-networking.png) - -The vagrant setup in Kubernetes does the following: - -The docker bridge is replaced with a brctl generated linux bridge (kbr0) with a 256 address space subnet. 
Basically, a node gets 10.244.x.0/24 subnet and docker is configured to use that bridge instead of the default docker0 bridge. - -Also, an OVS bridge is created(obr0) and added as a port to the kbr0 bridge. All OVS bridges across all nodes are linked with GRE tunnels. So, each node has an outgoing GRE tunnel to all other nodes. It does not need to be a complete mesh really, just meshier the better. STP (spanning tree) mode is enabled in the bridges to prevent loops. - -Routing rules enable any 10.244.0.0/16 target to become reachable via the OVS bridge connected with the tunnels. - - - diff --git a/docs/reference/design-docs/overview.md b/docs/reference/design-docs/overview.md deleted file mode 100644 index f6e228e4267c8..0000000000000 --- a/docs/reference/design-docs/overview.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: Overview of Kubernetes Design Docs ---- - -{% capture overview %} - -Here are some documents that describe aspects of the Kubernetes design: - -{% endcapture %} - -{% capture body %} - -* [Kubernetes Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/architecture.md) - -* [Kubernetes Design Overview](https://github.com/kubernetes/kubernetes/tree/release-1.6/docs/design) - -* [Kubernetes Identity and Access Management](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/auth/access.md) - -* [Kubernetes OpenVSwitch GRE/VxLAN networking](https://deploy-preview-6994--kubernetes-io-user-journeys.netlify.com/docs/admin/ovs-networking/) - -* [Security Contexts](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/auth/security_context.md) - -* [Security in Kubernetes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/auth/security.md) - -{% endcapture %} - - -{% include templates/concept.md %} From 44df1d9d0bbc45ad6203449073b6167c4dfce99c Mon Sep 17 00:00:00 2001 From: Joseph Heck Date: Sat, 3 Mar 2018 17:28:53 -0800 Subject: [PATCH 082/117] adding how-to for making glossary tooltips (#7014) * adding how-to for making glossary tooltips to include docs for contribution * cross reference from style guide * include example render --- docs/home/contribute/includes.md | 19 +++++++++++++++++-- docs/home/contribute/style-guide.md | 2 +- 2 files changed, 18 insertions(+), 3 deletions(-) diff --git a/docs/home/contribute/includes.md b/docs/home/contribute/includes.md index f26196f1fb3ca..0f6099085fd79 100644 --- a/docs/home/contribute/includes.md +++ b/docs/home/contribute/includes.md @@ -60,9 +60,24 @@ changed by setting the for_k8s_version variable. {{ "{% include feature-state-deprecated.md " }}%} ```` +## Glossary + +You can reference glossary terms with an inclusion that will automatically update and replace content with the relevant links from [our glossary](docs/reference/glossary/). When the term is moused-over by someone +using the online documentation, the glossary entry will display a tooltip. + +The raw data for glossary terms is stored at [https://github.com/kubernetes/website/tree/master/_data/glossary](https://github.com/kubernetes/website/tree/master/_data/glossary), with a YAML file for each glossary term. 
+ +### Glossary Demo + +For example, the following include within the markdown will render to {% glossary_tooltip text="cluster" term_id="cluster" %} with a tooltip: + +````liquid +{{ "{% glossary_tooltip text=" }}"cluster" term_id="cluster" %} +```` + ## Tabs -In a markdown page (.md file) on this site, you can add a tab set to display multiple flavors of a given solution. +In a markdown page (`.md` file) on this site, you can add a tab set to display multiple flavors of a given solution. ### Tabs demo @@ -160,7 +175,7 @@ The `capture [variable_name]` tags store text or markdown content and assign the {{ "{% assign tab_names = 'Default,Calico,Flannel,Romana,Weave Net' | split: ',' | compact " }}%} ```` -The `assign tab_names` tag takes a list of labels to use for the tabs. Label text can include spaces. The given comma delimited string is split into an array and assigned to the `tab_names` variable. +The `assign tab_names` tag takes a list of labels to use for the tabs. Label text can include spaces. The given comma delimited string is split into an array and assigned to the `tab_names` variable. #### Assigning tab contents diff --git a/docs/home/contribute/style-guide.md b/docs/home/contribute/style-guide.md index f616892d6f3ed..50432a35ba7a3 100644 --- a/docs/home/contribute/style-guide.md +++ b/docs/home/contribute/style-guide.md @@ -15,7 +15,7 @@ docs, follow the instructions on {% capture body %} -**Note:** Kubernetes documentation uses [GitHub Flavored Markdown](https://github.github.com/gfm/). +**Note:** Kubernetes documentation uses [GitHub Flavored Markdown](https://github.github.com/gfm/) along with a few [local jekyll includes](/docs/home/contribute/includes/) to support glossary entries, tabs, and representing feature state. {: .note} ## Language From 4ea101d9b055f3370950ec0573dad1acb0a9a31f Mon Sep 17 00:00:00 2001 From: Moussa Taifi Date: Sun, 4 Mar 2018 15:14:51 -0500 Subject: [PATCH 083/117] Add section in tasks => configmaps with examples for --from-env-file (#6777) * Add section in tasks => configmaps with examples for --from-env-file post #6648 * fix language to match style guide --- .../configure-pod-configmap.md | 69 +++++++++++++++++++ .../game-env-file.properties | 5 ++ .../ui-env-file.properties | 3 + 3 files changed, 77 insertions(+) create mode 100644 docs/tasks/configure-pod-container/game-env-file.properties create mode 100644 docs/tasks/configure-pod-container/ui-env-file.properties diff --git a/docs/tasks/configure-pod-container/configure-pod-configmap.md b/docs/tasks/configure-pod-container/configure-pod-configmap.md index ed36a60a0a41a..5175ae9829060 100644 --- a/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -143,6 +143,75 @@ game.properties: 158 bytes ui.properties: 83 bytes ``` +Use the option `--from-env-file` to create a ConfigMap from an env-file, for example: +```shell +# Env-files contain a list of environment variables. +# These syntax rules apply: +# Each line in an env file has to be in VAR=VAL format. +# Lines beginning with # (i.e. comments) are ignored. +# Blank lines are ignored. +# There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value)). 
+ + +cat docs/tasks/configure-pod-container/game-env-file.properties +enemies=aliens +lives=3 +allowed="true" + +# This comment and the empty line above it are ignored +``` + +```shell +kubectl create configmap game-config-env-file \ + --from-env-file=docs/tasks/configure-pod-container/game-env-file.properties +``` + +would produce the following ConfigMap: + +```shell +kubectl get configmap game-config-env-file -o yaml +apiVersion: v1 +data: + allowed: '"true"' + enemies: aliens + lives: "3" +kind: ConfigMap +metadata: + creationTimestamp: 2017-12-27T18:36:28Z + name: game-config-env-file + namespace: default + resourceVersion: "809965" + selfLink: /api/v1/namespaces/default/configmaps/game-config-env-file + uid: d9d1ca5b-eb34-11e7-887b-42010a8002b8 +``` + +When passing `--from-env-file` multiple times to create a ConfigMap from multiple data sources, only the last env-file is used: + +```shell +kubectl create configmap config-multi-env-files \ + --from-env-file=docs/tasks/configure-pod-container/game-env-file.properties \ + --from-env-file=docs/tasks/configure-pod-container/ui-env-file.properties +``` + +would produce the following ConfigMap: + +``` +kubectl get configmap config-multi-env-files -o yaml +apiVersion: v1 +data: + color: purple + how: fairlyNice + textmode: "true" +kind: ConfigMap +metadata: + creationTimestamp: 2017-12-27T18:38:34Z + name: config-multi-env-files + namespace: default + resourceVersion: "810136" + selfLink: /api/v1/namespaces/default/configmaps/config-multi-env-files + uid: 252c4572-eb35-11e7-887b-42010a8002b8 +``` + #### Define the key to use when creating a ConfigMap from a file You can define a key other than the file name to use in the `data` section of your ConfigMap when using the `--from-file` argument: diff --git a/docs/tasks/configure-pod-container/game-env-file.properties b/docs/tasks/configure-pod-container/game-env-file.properties new file mode 100644 index 0000000000000..a96a12eaa721c --- /dev/null +++ b/docs/tasks/configure-pod-container/game-env-file.properties @@ -0,0 +1,5 @@ +enemies=aliens +lives=3 +allowed="true" + +# This comment and the empty line above it are ignored diff --git a/docs/tasks/configure-pod-container/ui-env-file.properties b/docs/tasks/configure-pod-container/ui-env-file.properties new file mode 100644 index 0000000000000..1b5c76999497d --- /dev/null +++ b/docs/tasks/configure-pod-container/ui-env-file.properties @@ -0,0 +1,3 @@ +color=purple +textmode=true +how=fairlyNice From 819af04c3c8013013f84c9f398be00072a6812d8 Mon Sep 17 00:00:00 2001 From: Gray Date: Sun, 4 Mar 2018 15:17:52 -0500 Subject: [PATCH 084/117] Update minikube.md (#6922) When the container is being created, the ready number is 0/1. 
--- docs/getting-started-guides/minikube.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/getting-started-guides/minikube.md b/docs/getting-started-guides/minikube.md index 231b6affe4158..598b2521a1cf2 100644 --- a/docs/getting-started-guides/minikube.md +++ b/docs/getting-started-guides/minikube.md @@ -57,7 +57,7 @@ service "hello-minikube" exposed # To check whether the pod is up and running we can use the following: $ kubectl get pod NAME READY STATUS RESTARTS AGE -hello-minikube-3383150820-vctvh 1/1 ContainerCreating 0 3s +hello-minikube-3383150820-vctvh 0/1 ContainerCreating 0 3s # We can see that the pod is still being created from the ContainerCreating status $ kubectl get pod NAME READY STATUS RESTARTS AGE From 313f89c2d9d2b64b5584ae2882e2dc88be7aa1eb Mon Sep 17 00:00:00 2001 From: Sean Knox Date: Sun, 4 Mar 2018 12:24:51 -0800 Subject: [PATCH 085/117] (azure): change default location, add info (#7629) * (azure): change default location, add info * (ubuntu): fix CLI typo --- docs/getting-started-guides/ubuntu/installation.md | 8 +++++++- docs/getting-started-guides/ubuntu/troubleshooting.md | 2 +- 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/docs/getting-started-guides/ubuntu/installation.md b/docs/getting-started-guides/ubuntu/installation.md index 85665e8597f53..521a130907a91 100644 --- a/docs/getting-started-guides/ubuntu/installation.md +++ b/docs/getting-started-guides/ubuntu/installation.md @@ -90,9 +90,15 @@ juju bootstrap aws/us-east-2 or, another example, this time on Azure: ``` -juju bootstrap azure/centralus +juju bootstrap azure/westus2 ``` +If you receive this error, it is likely that the default Azure VM size (Standard D1 v2 [1 vcpu, 3.5 GB memory]) is not available in the Azure location: +``` +ERROR failed to bootstrap model: instance provisioning failed (Failed) +``` + + You will need a controller node for each cloud or region you are deploying to. See the [controller documentation](https://jujucharms.com/docs/2.2/controllers) for more information. Note that each controller can host multiple Kubernetes clusters in a given cloud or region. diff --git a/docs/getting-started-guides/ubuntu/troubleshooting.md b/docs/getting-started-guides/ubuntu/troubleshooting.md index 134b6cd67263f..335cf23dd7b41 100644 --- a/docs/getting-started-guides/ubuntu/troubleshooting.md +++ b/docs/getting-started-guides/ubuntu/troubleshooting.md @@ -46,7 +46,7 @@ During normal operation the Workload should read `active`, the Agent column (whi Status can become unwieldy for large clusters, it is then recommended to check status on individual services, for example to check the status on the workers only: - juju status kubernetes-workers + juju status kubernetes-worker or just on the etcd cluster: From bc1b890f7441d515a06d247ea7cd52bbfe81b672 Mon Sep 17 00:00:00 2001 From: Tomoe Sugihara <468185+tomoe@users.noreply.github.com> Date: Mon, 5 Mar 2018 05:26:52 +0900 Subject: [PATCH 086/117] Add a note on availability of tcp_syncookies syctl (#7627) * Add a note on availability of tcp_syncookies syctl Sysctl `net.ipv4.tcp_syncookies` is not availalbe on 4.4 kernel as it's not namespaced yet. 
* updating to use {: .note} notation per https://kubernetes.io/docs/home/contribute/style-guide/#note --- docs/concepts/cluster-administration/sysctl-cluster.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/docs/concepts/cluster-administration/sysctl-cluster.md b/docs/concepts/cluster-administration/sysctl-cluster.md index d79a6abd070c3..f7b715f6c3327 100644 --- a/docs/concepts/cluster-administration/sysctl-cluster.md +++ b/docs/concepts/cluster-administration/sysctl-cluster.md @@ -70,6 +70,9 @@ For Kubernetes 1.4, the following sysctls are supported in the _safe_ set: - `net.ipv4.ip_local_port_range`, - `net.ipv4.tcp_syncookies`. +**Note**: The example `net.ipv4.tcp_syncookies` is not namespaced on Linux kernel version 4.4 or lower. +{: .note} + This list will be extended in future Kubernetes versions when the kubelet supports better isolation mechanisms. From 4f7de6e5b5ac699dd8197f543b13878f5d83e05f Mon Sep 17 00:00:00 2001 From: Guang Ya Liu Date: Mon, 5 Mar 2018 04:27:52 +0800 Subject: [PATCH 087/117] Added TOC for HPA document for better reference. (#7624) --- .../run-application/horizontal-pod-autoscale-walkthrough.md | 3 +++ docs/tasks/run-application/horizontal-pod-autoscale.md | 3 +++ 2 files changed, 6 insertions(+) diff --git a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index b480283016389..8dc9a4cc814d8 100644 --- a/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -7,6 +7,9 @@ reviewers: title: Horizontal Pod Autoscaler Walkthrough --- +* TOC +{:toc} + Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization (or, with beta support, on some other, application-provided metrics). diff --git a/docs/tasks/run-application/horizontal-pod-autoscale.md b/docs/tasks/run-application/horizontal-pod-autoscale.md index 9c666a33e5546..2477e48fe1df7 100644 --- a/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -6,6 +6,9 @@ reviewers: title: Horizontal Pod Autoscaler --- +* TOC +{:toc} + This document describes the current state of the Horizontal Pod Autoscaler in Kubernetes. ## What is the Horizontal Pod Autoscaler? From 9aeb431032cd1d07fedef48e217f03e91f4ff48f Mon Sep 17 00:00:00 2001 From: Sean Knox Date: Sun, 4 Mar 2018 17:07:51 -0800 Subject: [PATCH 088/117] (ubuntu): fix broken etcd snapshot command (#7630) --- docs/getting-started-guides/ubuntu/backups.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/docs/getting-started-guides/ubuntu/backups.md b/docs/getting-started-guides/ubuntu/backups.md index b08f1710c99b6..e3e621b0b1091 100644 --- a/docs/getting-started-guides/ubuntu/backups.md +++ b/docs/getting-started-guides/ubuntu/backups.md @@ -21,10 +21,9 @@ The `snapshot` action of the etcd charm allows the operator to snapshot a running cluster's data for use in cloning, backing up, or migrating to a new cluster. - juju run-action etcd/0 snapshot target=/mnt/etcd-backups - -- **param** target: destination directory to save the resulting snapshot archive. + juju run-action etcd/0 snapshot +This will create a snapshot in `/home/ubuntu/etcd-snapshots` by default. 
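As an illustrative aside (not part of the patch above), retrieving a snapshot taken this way might look like the sketch below with a Juju 2.x client; the action id and the archive file name are placeholders, and the target directory is the `/home/ubuntu/etcd-snapshots` default mentioned above:

```shell
# queue the snapshot action and note the action id that Juju prints
juju run-action etcd/0 snapshot

# once the action has completed, inspect its output (substitute the real id)
juju show-action-output <action-id>

# copy the resulting archive to the local machine (the file name is an example)
juju scp etcd/0:/home/ubuntu/etcd-snapshots/etcd-snapshot-2018-03-04.tar.gz .
```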
## Restore etcd data From f49c84067834601b02cbe91bd880ed8062c3a82e Mon Sep 17 00:00:00 2001 From: Calvin Hartwell Date: Mon, 5 Mar 2018 02:27:51 +0000 Subject: [PATCH 089/117] Added Rancher <-> Ubuntu k8s integration documentation (#7607) * fixed the interacting with cluster section for the ubuntu installation * made changes as per request from zacharysarah * added rancher stub * added rancher documentation * fixed missing capture tag * added ingress example, fixed languages as per changes in the PR * fixed page index as per pr * fixed rancher readme --- _data/setup.yml | 2 +- docs/getting-started-guides/ubuntu/index.md | 5 + .../ubuntu/installation.md | 2 +- docs/getting-started-guides/ubuntu/rancher.md | 360 ++++++++++++++++++ 4 files changed, 367 insertions(+), 2 deletions(-) create mode 100644 docs/getting-started-guides/ubuntu/rancher.md diff --git a/_data/setup.yml b/_data/setup.yml index 142e01fc161d1..60bc342b4078e 100644 --- a/_data/setup.yml +++ b/_data/setup.yml @@ -94,7 +94,7 @@ toc: - docs/getting-started-guides/ubuntu/glossary.md - docs/getting-started-guides/ubuntu/local.md - docs/getting-started-guides/ubuntu/logging.md - + - docs/getting-started-guides/ubuntu/rancher.md - docs/getting-started-guides/windows/index.md - docs/admin/node-conformance.md diff --git a/docs/getting-started-guides/ubuntu/index.md b/docs/getting-started-guides/ubuntu/index.md index b9d23187d0b42..a10f746099470 100644 --- a/docs/getting-started-guides/ubuntu/index.md +++ b/docs/getting-started-guides/ubuntu/index.md @@ -51,6 +51,11 @@ These are more in-depth guides for users choosing to run Kubernetes in productio - [Operational Considerations](/docs/getting-started-guides/ubuntu/operational-considerations/) - [Glossary](/docs/getting-started-guides/ubuntu/glossary/) + +## Third-party Product Integrations + + - [Rancher](/docs/getting-started-guides/ubuntu/rancher/) + ## Developer Guides - [Localhost using LXD](/docs/getting-started-guides/ubuntu/local/) diff --git a/docs/getting-started-guides/ubuntu/installation.md b/docs/getting-started-guides/ubuntu/installation.md index 521a130907a91..d908aed7c9cbd 100644 --- a/docs/getting-started-guides/ubuntu/installation.md +++ b/docs/getting-started-guides/ubuntu/installation.md @@ -175,7 +175,7 @@ mkdir -p ~/.kube Copy the kubeconfig file to the default location. ``` -sudo juju scp kubernetes-master/0:/home/ubuntu/config ~/.kube/config +juju scp kubernetes-master/0:/home/ubuntu/config ~/.kube/config ``` The next step is to install the kubectl client on your local machine. The recommended way to do this on Ubuntu is using the kubectl snap ([https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-snap-on-ubuntu](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-snap-on-ubuntu)). diff --git a/docs/getting-started-guides/ubuntu/rancher.md b/docs/getting-started-guides/ubuntu/rancher.md new file mode 100644 index 0000000000000..224ba49eb9533 --- /dev/null +++ b/docs/getting-started-guides/ubuntu/rancher.md @@ -0,0 +1,360 @@ +--- +title: Rancher Integration with Ubuntu Kubernetes +--- + +{% capture overview %} +This repository explains how to deploy Rancher 2.0alpha on Canonical Kubernetes. + +These steps are currently in alpha/testing phase and will most likely change. + +The original documentation for this integration can be found at [https://github.com/CalvinHartwell/canonical-kubernetes-rancher/](https://github.com/CalvinHartwell/canonical-kubernetes-rancher/). 
+ +{% endcapture %} +{% capture prerequisites %} +To use this guide, you must have a working kubernetes cluster that was deployed using Canonical's juju. + +The full instructions for deploying Kubernetes with juju can be found at [https://kubernetes.io/docs/getting-started-guides/ubuntu/installation/](https://kubernetes.io/docs/getting-started-guides/ubuntu/installation/). +{% endcapture %} + + +{% capture steps %} +## Deploying Rancher + +To deploy Rancher, we just need to run the Rancher container workload on-top of Kubernetes. Rancher provides their containers through dockerhub ([https://hub.docker.com/r/rancher/server/tags/](https://hub.docker.com/r/rancher/server/tags/)) and can be downloaded freely from the internet. + +If you're running your own registry or have an offline deployment, the container should be downloaded and pushed to a private registry before proceeding. + +### Deploying Rancher with a nodeport + +First create a yaml file which defines how to deploy Rancher on kubernetes. Save the file as cdk-rancher-nodeport.yaml: + +``` + --- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: cluster-admin +subjects: + - kind: ServiceAccount + name: default + namespace: default +roleRef: + kind: ClusterRole + name: cluster-admin + apiGroup: rbac.authorization.k8s.io +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: cluster-admin +rules: +- apiGroups: + - '*' + resources: + - '*' + verbs: + - '*' +- nonResourceURLs: + - '*' + verbs: + - '*' +--- +apiVersion: extensions/v1beta1 +kind: Deployment +metadata: + creationTimestamp: null + labels: + app: rancher + name: rancher +spec: + replicas: 1 + selector: + matchLabels: + app: rancher + strategy: {} + template: + metadata: + creationTimestamp: null + labels: + app: rancher + ima: pod + spec: + containers: + - image: rancher/server:preview + imagePullPolicy: Always + name: rancher + ports: + - containerPort: 80 + - containerPort: 443 + livenessProbe: + httpGet: + path: / + port: 80 + initialDelaySeconds: 5 + timeoutSeconds: 30 + resources: {} + restartPolicy: Always + serviceAccountName: "" +status: {} +--- +apiVersion: v1 +kind: Service +metadata: + name: rancher + labels: + app: rancher +spec: + ports: + - port: 443 + protocol: TCP + targetPort: 443 + selector: + app: rancher +--- +apiVersion: v1 +kind: Service +metadata: + name: rancher-nodeport +spec: + type: NodePort + selector: + app: rancher + ports: + - name: rancher-api + protocol: TCP + nodePort: 30443 + port: 443 + targetPort: 443 +``` + +Once kubectl is running and working, run the following command to deploy Rancher: + +``` + kubectl apply -f cdk-rancher-nodeport.yaml +``` + +Now we need to open this nodeport so we can access it. For that, we can use juju. We need to run the open-port command for each of the worker nodes in our cluster. Inside the cdk-rancher-nodeport.yaml file, the nodeport has been set to 30443. Below shows how to open the port on each of the worker nodes: + +``` + # repeat this for each kubernetes worker in the cluster. + juju run --unit kubernetes-worker/0 "open-port 30443" + juju run --unit kubernetes-worker/1 "open-port 30443" + juju run --unit kubernetes-worker/2 "open-port 30443" +``` + +Rancher can now be accessed on this port through a worker IP or DNS entries if you have created them. It is generally recommended that you create a DNS entry for each of the worker nodes in your cluster. 
For example, if you have three worker nodes and you own the domain example.com, you could create three A records, one for each worker in the cluster. + +As creating DNS entries is outside of the scope of this document, we will use the freely available xip.io service which can return A records for an IP address which is part of the domain name. For example, if you have the domain rancher.35.178.130.245.xip.io, the xip.io service will automatically return the IP address 35.178.130.245 as an A record which is useful for testing purposes. For your deployment, the IP address 35.178.130.245 should be replaced with one of your worker IP address, which can be found using Juju or AWS: + +``` + calvinh@ubuntu-ws:~/Source/cdk-rancher$ juju status + +# ... output omitted. + +Unit Workload Agent Machine Public address Ports Message +easyrsa/0* active idle 0 35.178.118.232 Certificate Authority connected. +etcd/0* active idle 1 35.178.49.31 2379/tcp Healthy with 3 known peers +etcd/1 active idle 2 35.177.99.171 2379/tcp Healthy with 3 known peers +etcd/2 active idle 3 35.178.125.161 2379/tcp Healthy with 3 known peers +kubeapi-load-balancer/0* active idle 4 35.178.37.87 443/tcp Loadbalancer ready. +kubernetes-master/0* active idle 5 35.177.239.237 6443/tcp Kubernetes master running. + flannel/0* active idle 35.177.239.237 Flannel subnet 10.1.27.1/24 +kubernetes-worker/0* active idle 6 35.178.130.245 80/tcp,443/tcp,30443/tcp Kubernetes worker running. + flannel/2 active idle 35.178.130.245 Flannel subnet 10.1.82.1/24 +kubernetes-worker/1 active idle 7 35.178.121.29 80/tcp,443/tcp,30443/tcp Kubernetes worker running. + flannel/3 active idle 35.178.121.29 Flannel subnet 10.1.66.1/24 +kubernetes-worker/2 active idle 8 35.177.144.76 80/tcp,443/tcp,30443/tcp Kubernetes worker running. + flannel/1 active idle 35.177.144.76 + +# Note the IP addresses for the kubernetes-workers in the example above. You should pick one of the public addresses. +``` + +Try opening up Rancher in your browser using the nodeport and the domain name or ip address: + +``` + # replace the IP address with one of your Kubernetes worker, find this from juju status command. + wget https://35.178.130.245.xip.io:30443 --no-check-certificate + + # this should also work + wget https://35.178.130.245:30443 --no-check-certificate +``` + +If you need to make any changes to the kubernetes configuration file, edit the yaml file and then just use apply again: + +``` + kubectl apply -f cdk-rancher-nodeport.yaml +``` + +### Deploying Rancher with an ingress rule + +It is also possible to deploy Rancher using an ingress rule. This has the added benefit of not requiring additional ports to be opened up on the Kubernetes cluster. 
First create a yaml file to describe the deployment called cdk-rancher-ingress.yaml which should contain the following: + +``` +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: cluster-admin +subjects: + - kind: ServiceAccount + name: default + namespace: default +roleRef: + kind: ClusterRole + name: cluster-admin + apiGroup: rbac.authorization.k8s.io +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: cluster-admin +rules: +- apiGroups: + - '*' + resources: + - '*' + verbs: + - '*' +- nonResourceURLs: + - '*' + verbs: + - '*' +--- +apiVersion: extensions/v1beta1 +kind: Deployment +metadata: + creationTimestamp: null + labels: + app: rancher + name: rancher +spec: + replicas: 1 + selector: + matchLabels: + app: rancher + strategy: {} + template: + metadata: + creationTimestamp: null + labels: + app: rancher + spec: + containers: + - image: rancher/server:preview + imagePullPolicy: Always + name: rancher + ports: + - containerPort: 443 + livenessProbe: + httpGet: + path: / + port: 80 + initialDelaySeconds: 5 + timeoutSeconds: 30 + resources: {} + restartPolicy: Always + serviceAccountName: "" +status: {} +--- +apiVersion: v1 +kind: Service +metadata: + name: rancher + labels: + app: rancher +spec: + ports: + - port: 443 + targetPort: 443 + protocol: TCP + selector: + app: rancher +--- +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + name: rancher + annotations: + kubernetes.io/tls-acme: "true" + ingress.kubernetes.io/secure-backends: "true" +spec: + tls: + - hosts: + - rancher.34.244.118.135.xip.io + rules: + - host: rancher.34.244.118.135.xip.io + http: + paths: + - path: / + backend: + serviceName: rancher + servicePort: 443 +``` + +It is generally recommended that you create a DNS entry for each of the worker nodes in your cluster. For example, if you have three worker nodes and you own the domain example.com, you could create three A records, one for each worker in the cluster. + +As creating DNS entries is outside of the scope of this tutorial, we will use the freely available xip.io service which can return A records for an IP address which is part of the domain name. For example, if you have the domain rancher.35.178.130.245.xip.io, the xip.io service will automatically return the IP address 35.178.130.245 as an A record which is useful for testing purposes. + +For your deployment, the IP address 35.178.130.245 should be replaced with one of your worker IP address, which can be found using Juju or AWS: + +``` + calvinh@ubuntu-ws:~/Source/cdk-rancher$ juju status + +# ... output omitted. + +Unit Workload Agent Machine Public address Ports Message +easyrsa/0* active idle 0 35.178.118.232 Certificate Authority connected. +etcd/0* active idle 1 35.178.49.31 2379/tcp Healthy with 3 known peers +etcd/1 active idle 2 35.177.99.171 2379/tcp Healthy with 3 known peers +etcd/2 active idle 3 35.178.125.161 2379/tcp Healthy with 3 known peers +kubeapi-load-balancer/0* active idle 4 35.178.37.87 443/tcp Loadbalancer ready. +kubernetes-master/0* active idle 5 35.177.239.237 6443/tcp Kubernetes master running. + flannel/0* active idle 35.177.239.237 Flannel subnet 10.1.27.1/24 +kubernetes-worker/0* active idle 6 35.178.130.245 80/tcp,443/tcp,30443/tcp Kubernetes worker running. + flannel/2 active idle 35.178.130.245 Flannel subnet 10.1.82.1/24 +kubernetes-worker/1 active idle 7 35.178.121.29 80/tcp,443/tcp,30443/tcp Kubernetes worker running. 
+ flannel/3 active idle 35.178.121.29 Flannel subnet 10.1.66.1/24 +kubernetes-worker/2 active idle 8 35.177.144.76 80/tcp,443/tcp,30443/tcp Kubernetes worker running. + flannel/1 active idle 35.177.144.76 + +# Note the IP addresses for the kubernetes-workers in the example above. You should pick one of the public addresses. +``` + +Looking at the output from the juju status above, the Public Address (35.178.130.245) can be used to create a xip.io DNS entry (rancher.35.178.130.245.xip.io) which should be placed into the cdk-rancher-ingress.yaml file. You could also create your own DNS entry as long as it resolves to each of the worker nodes or one of them it will work fine: + +``` + # The xip.io domain should appear in two places in the file, change both entries. + cat cdk-rancher-ingress.yaml | grep xip.io + - host: rancher.35.178.130.245.xip.io +``` + +Once you've edited the ingress rule to reflect your DNS entries, run the kubectl apply -f cdk-rancher-ingress.yaml to deploy Kubernetes: + +``` + kubectl apply -f cdk-rancher-ingress.yaml +``` + +Rancher can now be accessed on the regular 443 through a worker IP or DNS entries if you have created them. Try opening it up in your browser: + +``` + # replace the IP address with one of your Kubernetes worker, find this from juju status command. + wget https://35.178.130.245.xip.io:443 --no-check-certificate +``` + +If you need to make any changes to the kubernetes configuration file, edit the yaml file and then just use apply again: + +``` + kubectl apply -f cdk-rancher-ingress.yaml +``` + +### Removing Rancher + +You can remove Rancher from your cluster using kubectl. Deleting constructs in Kubernetes is as simple as creating them: + +``` + # If you used the nodeport example change the yaml filename if you used the ingress example. + kubectl delete -f cdk-rancher-nodeport.yaml +``` +{% endcapture %} + +{% include templates/task.md %} From 2b7f83ffce781073ea329c0220dba3d30ade590b Mon Sep 17 00:00:00 2001 From: Aravind Date: Mon, 5 Mar 2018 12:40:51 +0530 Subject: [PATCH 090/117] Fixed k8s.io/docs/getting-started-guides/fedora/fedora_manual_config, according to #7417 (#7626) --- .../fedora/fedora_manual_config.md | 21 +++++++++++++++++-- 1 file changed, 19 insertions(+), 2 deletions(-) diff --git a/docs/getting-started-guides/fedora/fedora_manual_config.md b/docs/getting-started-guides/fedora/fedora_manual_config.md index d5d6e324ce9c6..94c1fc1cc44cb 100644 --- a/docs/getting-started-guides/fedora/fedora_manual_config.md +++ b/docs/getting-started-guides/fedora/fedora_manual_config.md @@ -121,10 +121,27 @@ KUBELET_ADDRESS="--address=0.0.0.0" KUBELET_HOSTNAME="--hostname-override=fed-node" # location of the api-server -KUBELET_API_SERVER="--api-servers=http://fed-master:8080" +KUBELET_ARGS="--cgroup-driver=systemd --kubeconfig=/etc/kubernetes/master-kubeconfig.yaml --require-kubeconfig" # Add your own! -#KUBELET_ARGS="" +KUBELET_ARGS="" + +``` + +```yaml +kind: Config +clusters: +- name: local + cluster: + server: http://fed-master:8080 +users: +- name: kubelet +contexts: +- context: + cluster: local + user: kubelet + name: kubelet-context +current-context: kubelet-context ``` * Start the appropriate services on the node (fed-node). 
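As an illustrative aside (not part of the patches above or below), the "appropriate services" in that last step are normally handled through systemd; a minimal sketch for fed-node, assuming the node runs `kube-proxy`, `kubelet` and `docker`:

```shell
# on fed-node: restart the node services and enable them so they start on boot
for SERVICE in kube-proxy kubelet docker; do
    systemctl restart $SERVICE
    systemctl enable $SERVICE
    systemctl status $SERVICE
done
```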
From 7cb230f2a5c6cc5e85a453661418abe3b356cd62 Mon Sep 17 00:00:00 2001 From: Stewart-YU Date: Mon, 5 Mar 2018 16:43:51 +0800 Subject: [PATCH 091/117] refactor setting up cluster using kubeadm docs (#7104) --- docs/setup/independent/install-kubeadm.md | 48 ++++++++++++------- .../independent/troubleshooting-kubeadm.md | 45 ++++------------- 2 files changed, 39 insertions(+), 54 deletions(-) diff --git a/docs/setup/independent/install-kubeadm.md b/docs/setup/independent/install-kubeadm.md index 08920178db72e..74947cbaa1b71 100644 --- a/docs/setup/independent/install-kubeadm.md +++ b/docs/setup/independent/install-kubeadm.md @@ -23,8 +23,8 @@ see the [Using kubeadm to Create a Cluster](/docs/setup/independent/create-clust * 2 GB or more of RAM per machine (any less will leave little room for your apps) * 2 CPUs or more * Full network connectivity between all machines in the cluster (public or private network is fine) -* Unique hostname, MAC address, and product_uuid for every node -* Certain ports are open on your machines. See the section below for more details +* Unique hostname, MAC address, and product_uuid for every node. See [here](https://kubernetes.io/docs/setup/independent/install-kubeadm/#verify-the-mac-address-and-product_uuid-are-unique-for-every-node) for more details. +* Certain ports are open on your machines. See [here](/docs/setup/indenpendent/install-kubeadm/#check-required-ports) for more details. * Swap disabled. You **MUST** disable swap in order for the kubelet to work properly. {% endcapture %} @@ -39,7 +39,7 @@ see the [Using kubeadm to Create a Cluster](/docs/setup/independent/create-clust It is very likely that hardware devices will have unique addresses, although some virtual machines may have identical values. Kubernetes uses these values to uniquely identify the nodes in the cluster. If these values are not unique to each node, the installation process -[may fail](https://github.com/kubernetes/kubeadm/issues/31). +may [fail](https://github.com/kubernetes/kubeadm/issues/31). ## Check network adapters @@ -87,7 +87,8 @@ Versions 17.06+ _might work_, but have not yet been tested and verified by the K Please proceed with executing the following commands based on your OS as root. You may become the root user by executing `sudo -i` after SSH-ing to each host. -You can use the following commands to install Docker on your system: +If you already have the required versions of the Docker installed, you can move on to next section. +If not, you can use the following commands to install Docker on your system: {% capture docker_ubuntu %} @@ -138,20 +139,6 @@ systemctl enable docker && systemctl start docker {% endcapture %} -**Note**: Make sure that the cgroup driver used by kubelet is the same as the one used by -Docker. To ensure compatibility you can either update Docker, like so: - -```bash -cat << EOF > /etc/docker/daemon.json -{ - "exec-opts": ["native.cgroupdriver=systemd"] -} -EOF -``` - -and restart Docker. Or ensure the `--cgroup-driver` kubelet flag is set to the same value -as Docker (e.g. `cgroupfs`). 
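As an illustrative aside (not part of the patch), the "restart Docker" step in the note removed above is usually done through systemd, after which the active cgroup driver can be confirmed:

```shell
# restart the Docker daemon so the new /etc/docker/daemon.json
# ("native.cgroupdriver=systemd") takes effect
systemctl restart docker

# confirm which cgroup driver Docker now reports
docker info | grep -i cgroup
```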
- {% assign tab_set_name = "docker_install" %} {% assign tab_names = "Ubuntu, Debian or HypriotOS;CentOS, RHEL or Fedora; Container Linux" | split: ';' | compact %} {% assign tab_contents = site.emptyArray | push: docker_ubuntu | push: docker_centos | push: docker_coreos %} @@ -273,6 +260,31 @@ systemctl enable kubelet && systemctl start kubelet The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do. +## Configure cgroup driver used by kubelet on Master Node + +Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. Verify that your Docker cgroup driver matches the kubelet config: + +```bash +docker info | grep -i cgroup +cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +``` + +If the Docker cgroup driver and the kubelet config don't match, change the kubelet config to match the Docker cgroup driver. The +flag you need to change is `--cgroup-driver`. If it's already set, you can update like so: + +```bash +sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +``` + +Otherwise, you will need to open the systemd file and add the flag to an existing environment line. + +Then restart kubelet: + +```bash +systemctl daemon-reload +systemctl restart kubelet +``` + ## Troubleshooting If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/independent/troubleshooting-kubeadm/). diff --git a/docs/setup/independent/troubleshooting-kubeadm.md b/docs/setup/independent/troubleshooting-kubeadm.md index 590dc3e1232f4..78d07353edf93 100644 --- a/docs/setup/independent/troubleshooting-kubeadm.md +++ b/docs/setup/independent/troubleshooting-kubeadm.md @@ -23,7 +23,7 @@ If your cluster is in an error state, you may have trouble in the configuration {% endcapture %} -#### `ebtables` or executable not found during installation +#### `ebtables` or some similar executable not found during installation If you see the following warnings while running `kubeadm init` @@ -61,8 +61,15 @@ This may be caused by a number of problems. The most common are: 1. Install docker again following instructions [here](/docs/setup/independent/install-kubeadm/#installing-docker). 1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to - [Errors on CentOS when setting up masters](#errors-on-centos-when-setting-up-masters) + [Configure cgroup driver used by kubelet on Master Node](/docs/setup/independent/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node) for detailed instructions. + The `kubectl describe pod` or `kubectl logs` commands can help you diagnose errors. For example: + +```bash +kubectl -n ${NAMESPACE} describe pod ${POD_NAME} + +kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME} +``` - control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`. 
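As an illustrative aside (not part of the patch), investigating a crashlooping control plane container as suggested above might look like this; the `kube-apiserver` filter and the container id are placeholders:

```shell
# list all containers, including ones that have already exited
docker ps -a | grep kube-apiserver

# read the logs of a suspect container (substitute an id from the output above)
docker logs <container-id>
```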
@@ -134,40 +141,6 @@ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config ``` -#### Errors on CentOS when setting up masters - -If you are using CentOS and encounter difficulty while setting up the master node, -verify that your Docker cgroup driver matches the kubelet config: - -```bash -docker info | grep -i cgroup -cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -``` - -If the Docker cgroup driver and the kubelet config don't match, change the kubelet config to match the Docker cgroup driver. The -flag you need to change is `--cgroup-driver`. If it's already set, you can update like so: - -```bash -sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -``` - -Otherwise, you will need to open the systemd file and add the flag to an existing environment line. - -Then restart kubelet: - -```bash -systemctl daemon-reload -systemctl restart kubelet -``` - -The `kubectl describe pod` or `kubectl logs` commands can help you diagnose errors. For example: - -```bash -kubectl -n ${NAMESPACE} describe pod ${POD_NAME} - -kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME} -``` - ### Default NIC When using flannel as the pod network in Vagrant The following error might indicate that something was wrong in the pod network: From 07d38fd48508939c72c243ce59b1c8f98f65134b Mon Sep 17 00:00:00 2001 From: Paul Michali Date: Mon, 5 Mar 2018 09:56:53 -0500 Subject: [PATCH 092/117] IPv6: Updating doc related to IPv6. (#7606) Made updates for files, related to IPv6. Note: The release notes cover 1.9. When 1.10 release notes are added, the comment about the /66 restriction for IPv6 can be removed. --- docs/concepts/services-networking/service.md | 2 +- docs/getting-started-guides/scratch.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/concepts/services-networking/service.md b/docs/concepts/services-networking/service.md index c4ceadab4115a..942df4210cb01 100644 --- a/docs/concepts/services-networking/service.md +++ b/docs/concepts/services-networking/service.md @@ -732,7 +732,7 @@ groups are modified with the following IP rules: Be aware that if `spec.loadBalancerSourceRanges` is not set, Kubernetes will allow traffic from `0.0.0.0/0` to the Node Security Group(s). If nodes have public IP addresses, be aware that non-NLB traffic can also reach all instances -in those modified security groups. IPv6 is not yet supported for source ranges. +in those modified security groups. In order to limit which client IP's can access the Network Load Balancer, specify `loadBalancerSourceRanges`. diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index 04aad31e76c67..b9b733c4b8f71 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -96,7 +96,7 @@ to implement one of the above options: - This can be done by manually running commands, or through a set of externally maintained scripts. - You have to implement this yourself, but it can give you an extra degree of flexibility. -You will need to select an address range for the Pod IPs. Note that IPv6 is not yet supported for Pod IPs. +You will need to select an address range for the Pod IPs. - Various approaches: - GCE: each project has its own `10.0.0.0/8`. 
Carve off a `/16` for each From 09f67a02b08d7be2c9e133e8668df84b1db1f30b Mon Sep 17 00:00:00 2001 From: AdamDang Date: Mon, 5 Mar 2018 23:33:51 +0800 Subject: [PATCH 093/117] Typo fix some "pod to schedule on"->"pod to be scheduled on" (#7637) Pod is the object to be scheduled, So "pod to schedule onto a node " and "pod to schedule on" are not suitable in the doc. "pod to be scheduled onto a node" is better. --- docs/concepts/configuration/assign-pod-node.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/concepts/configuration/assign-pod-node.md b/docs/concepts/configuration/assign-pod-node.md index b1291efc4508c..a99896a2aced2 100644 --- a/docs/concepts/configuration/assign-pod-node.md +++ b/docs/concepts/configuration/assign-pod-node.md @@ -100,11 +100,11 @@ everything that `nodeSelector` can express. Node affinity was introduced as alpha in Kubernetes 1.2. Node affinity is conceptually similar to `nodeSelector` -- it allows you to constrain which nodes your -pod is eligible to schedule on, based on labels on the node. +pod is eligible to be scheduled on, based on labels on the node. There are currently two types of node affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and `preferredDuringSchedulingIgnoredDuringExecution`. You can think of them as "hard" and "soft" respectively, -in the sense that the former specifies rules that *must* be met for a pod to schedule onto a node (just like +in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (just like `nodeSelector` but using a more expressive syntax), while the latter specifies *preferences* that the scheduler will try to enforce but will not guarantee. The "IgnoredDuringExecution" part of the names means that, similar to how `nodeSelector` works, if labels on a node change at runtime such that the affinity rules on a pod are no longer @@ -177,14 +177,14 @@ And inter-pod anti-affinity is specified as field `podAntiAffinity` of field `af The affinity on this pod defines one pod affinity rule and one pod anti-affinity rule. In this example, the `podAffinity` is `requiredDuringSchedulingIgnoredDuringExecution` while the `podAntiAffinity` is `preferredDuringSchedulingIgnoredDuringExecution`. The -pod affinity rule says that the pod can schedule onto a node only if that node is in the same zone +pod affinity rule says that the pod can be scheduled onto a node only if that node is in the same zone as at least one already-running pod that has a label with key "security" and value "S1". (More precisely, the pod is eligible to run on node N if node N has a label with key `failure-domain.beta.kubernetes.io/zone` and some value V such that there is at least one node in the cluster with key `failure-domain.beta.kubernetes.io/zone` and value V that is running a pod that has a label with key "security" and value "S1".) The pod anti-affinity -rule says that the pod prefers to not schedule onto a node if that node is already running a pod with label +rule says that the pod prefers not to be scheduled onto a node if that node is already running a pod with label having key "security" and value "S2". (If the `topologyKey` were `failure-domain.beta.kubernetes.io/zone` then -it would mean that the pod cannot schedule onto a node if that node is in the same zone as a pod with +it would mean that the pod cannot be scheduled onto a node if that node is in the same zone as a pod with label having key "security" and value "S2".) 
See the [design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md). For many more examples of pod affinity and anti-affinity, both the `requiredDuringSchedulingIgnoredDuringExecution` flavor and the `preferredDuringSchedulingIgnoredDuringExecution` flavor. @@ -206,7 +206,7 @@ If omitted, it defaults to the namespace of the pod where the affinity/anti-affi If defined but empty, it means "all namespaces." All `matchExpressions` associated with `requiredDuringSchedulingIgnoredDuringExecution` affinity and anti-affinity -must be satisfied for the pod to schedule onto a node. +must be satisfied for the pod to be scheduled onto a node. #### More Practical Use-cases From 96f76d8c43da1084e5ab46d2d805b4a60e5f4a3b Mon Sep 17 00:00:00 2001 From: Joseph Heck Date: Mon, 5 Mar 2018 07:42:52 -0800 Subject: [PATCH 094/117] adding PKS solution description and link (#7608) --- docs/setup/pick-right-solution.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/setup/pick-right-solution.md b/docs/setup/pick-right-solution.md index 10b69f75ec1e0..0dbab7d325003 100644 --- a/docs/setup/pick-right-solution.md +++ b/docs/setup/pick-right-solution.md @@ -49,7 +49,7 @@ a Kubernetes cluster from scratch. * [Madcore.Ai](https://madcore.ai) is devops-focused CLI tool for deploying Kubernetes infrastructure in AWS. Master, auto-scaling group nodes with spot-instances, ingress-ssl-lego, Heapster, and Grafana. -* [Platform9](https://platform9.com/products/kubernetes/) offers managed Kubernetes on-premises or on any public cloud, and provides 24/7 health monitoring and alerting. (Kube2go, a web-UI driven Kubernetes cluster deployment service Platform9 released, has been integrated to Platform9 Sandbox.) +* [Platform9](https://platform9.com/products/kubernetes/) offers managed Kubernetes on-premises or on any public cloud, and provides 24/7 health monitoring and alerting. (Kube2go, a web-UI driven Kubernetes cluster deployment service Platform9 released, has been integrated to Platform9 Sandbox.) * [OpenShift Dedicated](https://www.openshift.com/dedicated/) offers managed Kubernetes clusters powered by OpenShift. @@ -61,6 +61,8 @@ a Kubernetes cluster from scratch. * [Kubermatic](https://www.loodse.com) provides managed Kubernetes clusters for various public clouds, including AWS and Digital Ocean, as well as on-premises with OpenStack integration. +* [Pivotal Container Services](https://pivotal.io/platform/pivotal-container-service) provides enterprise-grade Kubernetes for both on-premises and public clouds. PKS enables on-demand provisioning of Kubernetes clusters, multi-tenancy and fully automated day-2 operations. +` # Turnkey Cloud Solutions These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a From 6b865291830f00fc79ae807de8c0786203b619ff Mon Sep 17 00:00:00 2001 From: Joseph Heck Date: Mon, 5 Mar 2018 13:46:50 -0800 Subject: [PATCH 095/117] erp, typo'd myself (#7645) --- docs/setup/pick-right-solution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/setup/pick-right-solution.md b/docs/setup/pick-right-solution.md index 0dbab7d325003..bae8cd08af9d3 100644 --- a/docs/setup/pick-right-solution.md +++ b/docs/setup/pick-right-solution.md @@ -61,7 +61,7 @@ a Kubernetes cluster from scratch. * [Kubermatic](https://www.loodse.com) provides managed Kubernetes clusters for various public clouds, including AWS and Digital Ocean, as well as on-premises with OpenStack integration. 
-* [Pivotal Container Services](https://pivotal.io/platform/pivotal-container-service) provides enterprise-grade Kubernetes for both on-premises and public clouds. PKS enables on-demand provisioning of Kubernetes clusters, multi-tenancy and fully automated day-2 operations. +* [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service) provides enterprise-grade Kubernetes for both on-premises and public clouds. PKS enables on-demand provisioning of Kubernetes clusters, multi-tenancy and fully automated day-2 operations. ` # Turnkey Cloud Solutions From b598d07d43ef3784063c261d104f5c8cead90def Mon Sep 17 00:00:00 2001 From: Ye Yin Date: Tue, 6 Mar 2018 09:42:53 +0800 Subject: [PATCH 096/117] Add RDMA device plugin (#7605) --- docs/concepts/cluster-administration/device-plugins.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/concepts/cluster-administration/device-plugins.md b/docs/concepts/cluster-administration/device-plugins.md index 432194ec2f15a..ac755e15fcadf 100644 --- a/docs/concepts/cluster-administration/device-plugins.md +++ b/docs/concepts/cluster-administration/device-plugins.md @@ -137,6 +137,7 @@ For examples of device plugin implementations, see: * The official [NVIDIA GPU device plugin](https://github.com/NVIDIA/k8s-device-plugin) * it requires using [nvidia-docker 2.0](https://github.com/NVIDIA/nvidia-docker) which allows you to run GPU enabled docker containers * The [NVIDIA GPU device plugin for COS base OS](https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu). +* The [RDMA device plugin](https://github.com/hustcat/k8s-rdma-device-plugin) {% endcapture %} From 78391515dae93ce085e6df9b588bb6318aab5e34 Mon Sep 17 00:00:00 2001 From: WanLinghao Date: Wed, 7 Mar 2018 00:13:53 +0800 Subject: [PATCH 097/117] fix sysctl miss in podsecuritypolicy descriptions. (#7600) modified: docs/concepts/cluster-administration/sysctl-cluster.md modified: docs/concepts/policy/pod-security-policy.md --- .../cluster-administration/sysctl-cluster.md | 19 +++++++++++++++++++ docs/concepts/policy/pod-security-policy.md | 6 ++++++ 2 files changed, 25 insertions(+) diff --git a/docs/concepts/cluster-administration/sysctl-cluster.md b/docs/concepts/cluster-administration/sysctl-cluster.md index f7b715f6c3327..6fa32786cb6a7 100644 --- a/docs/concepts/cluster-administration/sysctl-cluster.md +++ b/docs/concepts/cluster-administration/sysctl-cluster.md @@ -127,3 +127,22 @@ any node which has not enabled those two _unsafe_ sysctls explicitly. As with _node-level_ sysctls it is recommended to use [_taints and toleration_ feature](/docs/user-guide/kubectl/{{page.version}}/#taint) or [taints on nodes](/docs/concepts/configuration/taint-and-toleration/) to schedule those pods onto the right nodes. + +## PodSecurityPolicy Annotations + +The use of sysctl in pods can be controlled via annotations on the PodSecurityPolicy. + +Here is an example, it authorizes binding user creating pod with corresponding +_safe_ and _unsafe_ sysctls. + +```yaml +apiVersion: extensions/v1beta1 +kind: PodSecurityPolicy +metadata: + name: sysctl-psp + annotations: + security.alpha.kubernetes.io/sysctls: 'kernel.shm_rmid_forced' + security.alpha.kubernetes.io/unsafe-sysctls: 'net.ipv4.route.*,kernel.msg*' +spec: + ... 
+``` diff --git a/docs/concepts/policy/pod-security-policy.md b/docs/concepts/policy/pod-security-policy.md index 3bd22cf3d1db5..2c607231f4321 100644 --- a/docs/concepts/policy/pod-security-policy.md +++ b/docs/concepts/policy/pod-security-policy.md @@ -37,6 +37,7 @@ administrator to control the following: | The SELinux context of the container | [`seLinux`](#selinux) | | The AppArmor profile used by containers | [annotations](#apparmor) | | The seccomp profile used by containers | [annotations](#seccomp) | +| The sysctl profile used by containers | [annotations](#sysctl) | ## Enabling Pod Security Policies @@ -554,3 +555,8 @@ specifies which values are allowed for the pod seccomp annotations. Specified as a comma-delimited list of allowed values. Possible values are those listed above, plus `*` to allow all profiles. Absence of this annotation means that the default cannot be changed. + +### Sysctl + +Controlled via annotations on the PodSecurityPolicy. Refer to the [Sysctl documentation]( +/docs/concepts/cluster-administration/sysctl-cluster/#podsecuritypolicy-annotations). From 51e588c629817a25f562d0fc99ea8fbf383c0477 Mon Sep 17 00:00:00 2001 From: Andrew Chen Date: Tue, 6 Mar 2018 10:14:51 -0800 Subject: [PATCH 098/117] Remove/hide cluster operator advanced user journey (+1 squashed commit) (#7643) Squashed commits: [9ee8f2a] Remove/hide cluster operator advanced user journey --- _data/setup.yml | 1 - .../users/cluster-operator/{advanced.md => _advanced.md} | 0 skip_toc_check.txt | 1 + 3 files changed, 1 insertion(+), 1 deletion(-) rename docs/user-journeys/users/cluster-operator/{advanced.md => _advanced.md} (100%) diff --git a/_data/setup.yml b/_data/setup.yml index 60bc342b4078e..bca31adfa5aa5 100644 --- a/_data/setup.yml +++ b/_data/setup.yml @@ -112,7 +112,6 @@ toc: section: - docs/user-journeys/users/cluster-operator/foundational.md - docs/user-journeys/users/cluster-operator/intermediate.md - - docs/user-journeys/users/cluster-operator/advanced.md - title: Docs Contributor path: /docs/home/?path=contributors&persona=docs-contributor&level=foundational diff --git a/docs/user-journeys/users/cluster-operator/advanced.md b/docs/user-journeys/users/cluster-operator/_advanced.md similarity index 100% rename from docs/user-journeys/users/cluster-operator/advanced.md rename to docs/user-journeys/users/cluster-operator/_advanced.md diff --git a/skip_toc_check.txt b/skip_toc_check.txt index dab2f8437930b..b8cd18d719c44 100644 --- a/skip_toc_check.txt +++ b/skip_toc_check.txt @@ -67,3 +67,4 @@ docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade.md docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_apply.md docs/reference/setup-tools/kubeadm/generated/kubeadm_upgrade_plan.md docs/reference/setup-tools/kubeadm/generated/kubeadm_version.md +docs/user-journeys/users/cluster-operator/_advanced.md From 9f2499294c548018135fc641be353cc569ffe372 Mon Sep 17 00:00:00 2001 From: Dusan Susic Date: Tue, 6 Mar 2018 19:23:52 +0100 Subject: [PATCH 099/117] Correct naming (#7656) it's confusing to have etcd1 twice, rename it to etcd0, etcd1, etcd2. 
--- docs/setup/independent/high-availability.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/setup/independent/high-availability.md b/docs/setup/independent/high-availability.md index 1d5df35ccf73a..e651a6e98ca71 100644 --- a/docs/setup/independent/high-availability.md +++ b/docs/setup/independent/high-availability.md @@ -331,7 +331,7 @@ Please select one of the tabs to see installation instructions for the respectiv - --peer-key-file=/certs/peer-key.pem \ - --peer-client-cert-auth \ - --peer-trusted-ca-file=/certs/ca.pem \ - - --initial-cluster etcd0=https://:2380,etcd1=https://:2380,etcd1=https://:2380 \ + - --initial-cluster etcd0=https://:2380,etcd1=https://:2380,etcd2=https://:2380 \ - --initial-cluster-token my-etcd-token \ - --initial-cluster-state new image: gcr.io/google_containers/etcd-amd64:3.1.0 From f9c7ee03c8fc602794491bb45e53f6d444a46c1f Mon Sep 17 00:00:00 2001 From: Joseph Heck Date: Tue, 6 Mar 2018 17:32:52 -0800 Subject: [PATCH 100/117] Adding missing quote to command line example (#7660) Fixes #7658 --- docs/setup/independent/install-kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/setup/independent/install-kubeadm.md b/docs/setup/independent/install-kubeadm.md index 74947cbaa1b71..6e97f37d67e02 100644 --- a/docs/setup/independent/install-kubeadm.md +++ b/docs/setup/independent/install-kubeadm.md @@ -273,7 +273,7 @@ If the Docker cgroup driver and the kubelet config don't match, change the kubel flag you need to change is `--cgroup-driver`. If it's already set, you can update like so: ```bash -sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf ``` Otherwise, you will need to open the systemd file and add the flag to an existing environment line. From 193283af1faf60258020d7ced29fb49e9048c71b Mon Sep 17 00:00:00 2001 From: Kai Chen Date: Tue, 6 Mar 2018 17:43:52 -0800 Subject: [PATCH 101/117] Remove reference to OVS as a called-out option for on-prem networking model (#7665) --- .../cluster-administration/cluster-administration-overview.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/cluster-administration/cluster-administration-overview.md b/docs/concepts/cluster-administration/cluster-administration-overview.md index 0d176a210f662..f6b183a14cd67 100644 --- a/docs/concepts/cluster-administration/cluster-administration-overview.md +++ b/docs/concepts/cluster-administration/cluster-administration-overview.md @@ -21,7 +21,7 @@ Before choosing a guide, here are some considerations: - **If you are designing for high-availability**, learn about configuring [clusters in multiple zones](/docs/concepts/cluster-administration/federation/). - Will you be using **a hosted Kubernetes cluster**, such as [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), or **hosting your own cluster**? - Will your cluster be **on-premises**, or **in the cloud (IaaS)**? Kubernetes does not directly support hybrid clusters. Instead, you can set up multiple clusters. - - **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best. One option for custom networking is [*OpenVSwitch GRE/VxLAN networking*](/docs/admin/ovs-networking/), which uses OpenVSwitch to set up networking between pods across Kubernetes nodes. 
+ - **If you are configuring Kubernetes on-premises**, consider which [networking model](/docs/concepts/cluster-administration/networking/) fits best. - Will you be running Kubernetes on **"bare metal" hardware** or on **virtual machines (VMs)**? - Do you **just want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the latter, choose an actively-developed distro. Some distros only use binary releases, but From 41c9f9aa8fc448b0fe0e84e2ea525a745af3b6f0 Mon Sep 17 00:00:00 2001 From: Venil Noronha Date: Tue, 6 Mar 2018 21:26:53 -0800 Subject: [PATCH 102/117] Updates documentation pertaining to VMware (#7661) * Adds 301 redirection for /docs/getting-started-guides/vsphere/ to https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/ * Updates hyperlinks from /docs/getting-started-guides/vsphere/ to https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/ * Updates solution table entries for VMware vSphere * Fixes typos i.e. VMWare/Vmware -> VMware --- _data/setup.yml | 3 +- .../migrators/vmware-openstack.yaml | 2 +- _redirects | 1 + docs/concepts/storage/volumes.md | 2 +- docs/getting-started-guides/minikube.md | 2 +- docs/getting-started-guides/ubuntu/index.md | 2 +- .../ubuntu/installation.md | 2 +- docs/getting-started-guides/vsphere.md | 211 ------------------ docs/getting-started-guides/windows/index.md | 2 +- docs/setup/pick-right-solution.md | 10 +- 10 files changed, 14 insertions(+), 223 deletions(-) delete mode 100644 docs/getting-started-guides/vsphere.md diff --git a/_data/setup.yml b/_data/setup.yml index bca31adfa5aa5..971c5a4070ce9 100644 --- a/_data/setup.yml +++ b/_data/setup.yml @@ -64,7 +64,8 @@ toc: section: - docs/getting-started-guides/coreos/index.md - docs/getting-started-guides/cloudstack.md - - docs/getting-started-guides/vsphere.md + - title: VMware vSphere + path: https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/ - docs/getting-started-guides/dcos.md - docs/getting-started-guides/ovirt.md diff --git a/_data/user-personas/migrators/vmware-openstack.yaml b/_data/user-personas/migrators/vmware-openstack.yaml index a4aea1a4bef18..45a889cd5229e 100644 --- a/_data/user-personas/migrators/vmware-openstack.yaml +++ b/_data/user-personas/migrators/vmware-openstack.yaml @@ -1,5 +1,5 @@ id: vmware-openstack -name: Migrating from VMWare and/or OpenStack +name: Migrating from VMware and/or OpenStack index: 0 foundational: - label: "a1: foundational stuff" diff --git a/_redirects b/_redirects index 43ed2beb92b0f..3d5c08fe03db3 100644 --- a/_redirects +++ b/_redirects @@ -176,6 +176,7 @@ /docs/getting-started-guides/ubuntu/automated/ /docs/getting-started-guides/ubuntu/ 301 /docs/getting-started-guides/ubuntu/calico/ /docs/getting-started-guides/ubuntu/ 301 /docs/getting-started-guides/vagrant/ /docs/getting-started-guides/alternatives/ 301 +/docs/getting-started-guides/vsphere/ https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/ 301 /docs/getting-started-guides/windows/While/ /docs/getting-started-guides/windows/ 301 /docs/getting-started-guides/centos/* /docs/setup/independent/create-cluster-kubeadm/ 301 diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md index d298cfbd9a539..a805ce9428af7 100644 --- a/docs/concepts/storage/volumes.md +++ b/docs/concepts/storage/volumes.md @@ -848,7 +848,7 @@ For more information including Dynamic Provisioning and Persistent Volume Claims ### vsphereVolume **Prerequisite:** Kubernetes with vSphere Cloud 
Provider configured. For cloudprovider -configuration please refer [vSphere getting started guide](/docs/getting-started-guides/vsphere/). +configuration please refer [vSphere getting started guide](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/). {: .note} A `vsphereVolume` is used to mount a vSphere VMDK Volume into your Pod. The contents diff --git a/docs/getting-started-guides/minikube.md b/docs/getting-started-guides/minikube.md index 598b2521a1cf2..3b6b68ba2b0a5 100644 --- a/docs/getting-started-guides/minikube.md +++ b/docs/getting-started-guides/minikube.md @@ -298,7 +298,7 @@ Some drivers will mount a host folder within the VM so that you can easily share | VirtualBox | Linux | /home | /hosthome | | VirtualBox | OSX | /Users | /Users | | VirtualBox | Windows | C://Users | /c/Users | -| VMWare Fusion | OSX | /Users | /Users | +| VMware Fusion | OSX | /Users | /Users | | Xhyve | OSX | /Users | /Users | diff --git a/docs/getting-started-guides/ubuntu/index.md b/docs/getting-started-guides/ubuntu/index.md index a10f746099470..b503e8025d0b0 100644 --- a/docs/getting-started-guides/ubuntu/index.md +++ b/docs/getting-started-guides/ubuntu/index.md @@ -11,7 +11,7 @@ There are multiple ways to run a Kubernetes cluster with Ubuntu. These pages exp - [The Canonical Distribution of Kubernetes](https://www.ubuntu.com/cloud/kubernetes) -The latest version of Kubernetes with upstream binaries. Supports AWS, GCE, Azure, Joyent, OpenStack, VMWare, Bare Metal and localhost deployments. +The latest version of Kubernetes with upstream binaries. Supports AWS, GCE, Azure, Joyent, OpenStack, VMware, Bare Metal and localhost deployments. ### Quick Start diff --git a/docs/getting-started-guides/ubuntu/installation.md b/docs/getting-started-guides/ubuntu/installation.md index d908aed7c9cbd..f02bcb5bdb3fc 100644 --- a/docs/getting-started-guides/ubuntu/installation.md +++ b/docs/getting-started-guides/ubuntu/installation.md @@ -54,7 +54,7 @@ Microsoft Azure | Juju | Ubuntu | flannel | [docs](/docs Google Compute Engine (GCE) | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) Joyent | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) Rackspace | Juju | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) -VMWare vSphere | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) +VMware vSphere | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) Bare Metal (MAAS) | Juju | Ubuntu | flannel, calico | [docs](/docs/getting-started-guides/ubuntu) | | [Commercial](https://ubuntu.com/cloud/kubernetes), [Community](https://github.com/juju-solutions/bundle-kubernetes-core) diff --git a/docs/getting-started-guides/vsphere.md b/docs/getting-started-guides/vsphere.md deleted file mode 100644 index 43b5069dd485d..0000000000000 --- 
a/docs/getting-started-guides/vsphere.md +++ /dev/null @@ -1,211 +0,0 @@ ---- -reviewers: -- erictune -- jbeda -title: VMware vSphere ---- - -This page covers how to get started with deploying Kubernetes on vSphere and details for how to configure the vSphere Cloud Provider. - -* TOC -{:toc} - -### Getting started with the vSphere Cloud Provider - -Kubernetes comes with *vSphere Cloud Provider*, a cloud provider for vSphere that allows Kubernetes Pods to use vSphere Storage. - -### Deploy Kubernetes on vSphere - -To deploy Kubernetes on vSphere and use the vSphere Cloud Provider, see [Kubernetes-Anywhere](https://github.com/kubernetes/kubernetes-anywhere). - -Detailed steps can be found at the [getting started with Kubernetes-Anywhere on vSphere](https://git.k8s.io/kubernetes-anywhere/phase1/vsphere/README.md) page. - -### vSphere Cloud Provider - -vSphere Cloud Provider allows Kubernetes to use vSphere-managed storage. It supports: - -- Services such as de-duplication and encryption with vSAN, QoS, high availability and data reliability. -- Policy based management at granularity of container volumes. -- Volumes, Persistent Volumes, Storage Classes, dynamic provisioning of volumes, and scalable deployment of Stateful Apps with StatefulSets. - -For more detail visit [vSphere Storage for Kubernetes Documentation](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/index.html). - -Documentation for how to use vSphere managed storage can be found in the [persistent volumes user guide](/docs/concepts/storage/persistent-volumes/#vsphere) and the [volumes user guide](/docs/concepts/storage/volumes/#vspherevolume). - -Examples can be found [here](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere). - -#### Enable vSphere Cloud Provider - -If a Kubernetes cluster has not been deployed using Kubernetes-Anywhere, follow the instructions below to enable the vSphere Cloud Provider. These steps are not needed when using Kubernetes-Anywhere, they will be done as part of the deployment. - -**Step-1** [Create a VM folder](https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.vcenterhost.doc/GUID-031BDB12-D3B2-4E2D-80E6-604F304B4D0C.html) and move Kubernetes Node VMs to this folder. - -**Step-2** Make sure Node VM names must comply with the regex `[a-z](([-0-9a-z]+)?[0-9a-z])?(\.[a-z0-9](([-0-9a-z]+)?[0-9a-z])?)*`. If Node VMs do not comply with this regex, rename them and make it compliant to this regex. - - Node VM names constraints: - - * VM names can not begin with numbers. - * VM names can not have capital letters, any special characters except `.` and `-`. - * VM names can not be shorter than 3 chars and longer than 63. - -**Step-3** Enable disk UUID on Node virtual machines. - -The disk.EnableUUID parameter must be set to "TRUE" for each Node VM. This step is necessary so that the VMDK always presents a consistent UUID to the VM, thus allowing the disk to be mounted properly. 
- -For each of the virtual machine nodes that will be participating in the cluster, follow the steps below using [govc tool](https://github.com/vmware/govmomi/tree/master/govc) - -* Set up the **govc** environment - - export GOVC_URL='vCenter IP OR FQDN' - export GOVC_USERNAME='vCenter User' - export GOVC_PASSWORD='vCenter Password' - export GOVC_INSECURE=1 - -* Find Node VM Paths - - govc ls /datacenter/vm/ - -* Set disk.EnableUUID to true for all VMs - - govc vm.change -e="disk.enableUUID=1" -vm='VM Path' - -Note: If Kubernetes Node VMs are created from template VM then `disk.EnableUUID=1` can be set on the template VM. VMs cloned from this template, will automatically inherit this property. - -**Step-4** Create and assign Roles to the vSphere Cloud Provider user and vSphere entities. - -Note: if you want to use Administrator account then this step can be skipped. - -vSphere Cloud Provider requires the following minimal set of privileges to interact with vCenter. Please refer [vSphere Documentation Center](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.security.doc/GUID-18071E9A-EED1-4968-8D51-E0B4F526FDA3.html) to know about steps for creating a Custom Role, User and Role Assignment. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-| Roles | Privileges | Entities | Propagate to Children |
-| --- | --- | --- | --- |
-| manage-k8s-node-vms | Resource.AssignVMToPool, System.Anonymous, System.Read, System.View, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.Delete | Cluster, Hosts, VM Folder | Yes |
-| manage-k8s-volumes | Datastore.AllocateSpace, Datastore.FileManagement, System.Anonymous, System.Read, System.View | Datastore | No |
-| k8s-system-read-and-spbm-profile-view | StorageProfile.View, System.Anonymous, System.Read, System.View | vCenter | No |
-| ReadOnly | System.Anonymous, System.Read, System.View | Datacenter, Datastore Cluster, Datastore Storage Folder | No |
    - -**Step-5** Create the vSphere cloud config file (`vsphere.conf`). Cloud config template can be found [here](https://github.com/kubernetes/kubernetes-anywhere/blob/master/phase1/vsphere/vsphere.conf). - -This config file needs to be placed in the shared directory which should be accessible from kubelet container, controller-manager pod, and API server pod. - -**`vsphere.conf` for master node:** - -``` -[Global] - user = "vCenter username for cloud provider" - password = "password" - server = "IP/FQDN for vCenter" - port = "443" #Optional - insecure-flag = "1" #set to 1 if the vCenter uses a self-signed cert - datacenter = "Datacenter name" - datastore = "Datastore name" #Datastore to use for provisioning volumes using storage classes/dynamic provisioning - working-dir = "vCenter VM folder path in which node VMs are located" - vm-name = "VM name of the Master Node" #Optional - vm-uuid = "UUID of the Node VM" # Optional -[Disk] - scsicontrollertype = pvscsi -``` - -Note: **```vm-name``` parameter is introduced in 1.6.4 release.** Both ```vm-uuid``` and ```vm-name``` are optional parameters. If ```vm-name``` is specified then ```vm-uuid``` is not used. If both are not specified then kubelet will get vm-uuid from `/sys/class/dmi/id/product_serial` and query vCenter to find the Node VM's name. - -**`vsphere.conf` for worker nodes:** - -Applicable only to versions 1.6.4 to 1.8.x. For versions earlier than 1.6.4, this file should have all the parameters specified in the master node's `vsphere.conf` file. In version 1.9.0 and later, the worker nodes do not need a cloud config file. - -``` -[Global] - vm-name = "VM name of the Worker Node" -``` - -Below is summary of supported parameters in the `vsphere.conf` file - -* ```user``` is the vCenter username for vSphere Cloud Provider. -* ```password``` is the password for vCenter user specified with `user`. -* ```server``` is the vCenter Server IP or FQDN -* ```port``` is the vCenter Server Port. Default is 443 if not specified. -* ```insecure-flag``` is set to 1 if vCenter used a self-signed certificate. -* ```datacenter``` is the name of the datacenter on which Node VMs are deployed. -* ```datastore``` is the default datastore to use for provisioning volumes using storage classes/dynamic provisioning. -* ```vm-name``` is recently added configuration parameter. This is optional parameter. When this parameter is present, ```vsphere.conf``` file on the worker node does not need vCenter credentials. - - **Note:** ```vm-name``` is added in the release 1.6.4. Prior releases does not support this parameter. - -* ```working-dir``` can be set to empty ( working-dir = ""), if Node VMs are located in the root VM folder. -* ```vm-uuid``` is the VM Instance UUID of virtual machine. ```vm-uuid``` can be set to empty (```vm-uuid = ""```). If set to empty, this will be retrieved from /sys/class/dmi/id/product_serial file on virtual machine (requires root access). - - * ```vm-uuid``` needs to be set in this format - ```423D7ADC-F7A9-F629-8454-CE9615C810F1``` - - * ```vm-uuid``` can be retrieved from Node Virtual machines using following command. This will be different on each node VM. - - cat /sys/class/dmi/id/product_serial | sed -e 's/^VMware-//' -e 's/-/ /' | awk '{ print toupper($1$2$3$4 "-" $5$6 "-" $7$8 "-" $9$10 "-" $11$12$13$14$15$16) }' - -* `datastore` is the default datastore used for provisioning volumes using storage classes. 
If datastore is located in storage folder or datastore is member of datastore cluster, make sure to specify full datastore path. Make sure vSphere Cloud Provider user has Read Privilege set on the datastore cluster or storage folder to be able to find datastore. - * For datastore located in the datastore cluster, specify datastore as mentioned below - - datastore = "DatastoreCluster/datastore1" - - * For datastore located in the storage folder, specify datastore as mentioned below - - datastore = "DatastoreStorageFolder/datastore1" - -**Step-6** Add flags to controller-manager, API server and Kubelet to enable vSphere Cloud Provider. -* Add following flags to kubelet running on every node and to the controller-manager and API server pods manifest files. - -``` ---cloud-provider=vsphere ---cloud-config= -``` - -Manifest files for API server and controller-manager are generally located at `/etc/kubernetes/manifests`. - -**Step-7** Restart Kubelet on all nodes. - -* Reload kubelet systemd unit file using ```systemctl daemon-reload``` -* Restart kubelet service using ```systemctl restart kubelet.service``` - -Note: After enabling the vSphere Cloud Provider, Node names will be set to the VM names from the vCenter Inventory. - -#### Known issues -Please visit [known issues](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/known-issues.html) for the list of major known issues with Kubernetes vSphere Cloud Provider. - -## Support Level - -For quick support please join VMware Code Slack ([kubernetes](https://vmwarecode.slack.com/messages/kubernetes/)) and post your question. - -IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level --------------------- | ------------ | ------ | ---------- | --------------------------------------------- | --------- | ---------------------------- -Vmware vSphere | Kube-anywhere | Photon OS | Flannel | [docs](/docs/getting-started-guides/vsphere/) | | Community ([@abrarshivani](https://github.com/abrarshivani)), ([@kerneltime](https://github.com/kerneltime)), ([@BaluDontu](https://github.com/BaluDontu)), ([@luomiao](https://github.com/luomiao)), ([@divyenpatel](https://github.com/divyenpatel)) - -If you identify any issues/problems using the vSphere cloud provider, you can create an issue in our repo - [VMware Kubernetes](https://github.com/vmware/kubernetes). - - -For support level information on all solutions, see the [Table of solutions](/docs/getting-started-guides/#table-of-solutions) chart. diff --git a/docs/getting-started-guides/windows/index.md b/docs/getting-started-guides/windows/index.md index cc2c53b32465a..23836bbc7560d 100644 --- a/docs/getting-started-guides/windows/index.md +++ b/docs/getting-started-guides/windows/index.md @@ -444,7 +444,7 @@ Some of these limitations will be addressed by the community in future releases - Hyper-V isolated containers are not supported. - Windows container OS must match the Host OS. If it does not, the pod will get stuck in a crash loop. - Under the networking models of L3 or Host GW, Kubernetes Services are inaccessible to Windows nodes due to a Windows issue. This is not an issue if using OVN/OVS for networking. 
-- Windows kubelet.exe may fail to start when running on Windows Server under VMWare Fusion [issue 57110](https://github.com/kubernetes/kubernetes/pull/57124) +- Windows kubelet.exe may fail to start when running on Windows Server under VMware Fusion [issue 57110](https://github.com/kubernetes/kubernetes/pull/57124) - Flannel and Weavenet are not yet supported ## Next steps and resources diff --git a/docs/setup/pick-right-solution.md b/docs/setup/pick-right-solution.md index bae8cd08af9d3..bc8b087a44368 100644 --- a/docs/setup/pick-right-solution.md +++ b/docs/setup/pick-right-solution.md @@ -117,9 +117,9 @@ These solutions are combinations of cloud providers and operating systems not co * [Vagrant](/docs/getting-started-guides/coreos/) (uses CoreOS and flannel) * [CloudStack](/docs/getting-started-guides/cloudstack/) (uses Ansible, CoreOS and flannel) -* [Vmware vSphere](/docs/getting-started-guides/vsphere/) (uses Debian) -* [Vmware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (uses Juju, Ubuntu and flannel) -* [Vmware](/docs/getting-started-guides/coreos/) (uses CoreOS and flannel) +* [VMware vSphere](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) +* [VMware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (uses Juju, Ubuntu and flannel) +* [VMware](/docs/getting-started-guides/coreos/) (uses CoreOS and flannel) * [oVirt](/docs/getting-started-guides/ovirt/) * [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) (uses Fedora and flannel) @@ -166,14 +166,14 @@ GCE | CoreOS | CoreOS | flannel | [docs](/docs/gettin Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/bare_metal_offline/) | Community ([@jeffbean](https://github.com/jeffbean)) CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack/) | Community ([@sebgoa](https://github.com/sebgoa)) -Vmware vSphere | Saltstack | Debian | OVS | [docs](/docs/getting-started-guides/vsphere/) | Community ([@imkin](https://github.com/imkin)) +VMware vSphere | any | multi-support | multi-support | [docs](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) | [Community](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/contactus.html) lxd | Juju | Ubuntu | flannel/canal | [docs](/docs/getting-started-guides/ubuntu/local/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) AWS | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) Azure | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) GCE | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) Oracle Cloud | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) 
Rackspace | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) -Vmware vSphere | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) +VMware vSphere | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) Bare Metal | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) AWS | Saltstack | Debian | AWS | [docs](/docs/getting-started-guides/aws/) | Community ([@justinsb](https://github.com/justinsb)) AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops/) | Community ([@justinsb](https://github.com/justinsb)) From 1c4a23562d20b88db32f1c0484e5ec0f749778fa Mon Sep 17 00:00:00 2001 From: Kai Chen Date: Wed, 7 Mar 2018 13:10:52 -0800 Subject: [PATCH 103/117] Fix broken URL (#7670) --- docs/setup/independent/install-kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/setup/independent/install-kubeadm.md b/docs/setup/independent/install-kubeadm.md index 6e97f37d67e02..fb7e18cff7eac 100644 --- a/docs/setup/independent/install-kubeadm.md +++ b/docs/setup/independent/install-kubeadm.md @@ -24,7 +24,7 @@ see the [Using kubeadm to Create a Cluster](/docs/setup/independent/create-clust * 2 CPUs or more * Full network connectivity between all machines in the cluster (public or private network is fine) * Unique hostname, MAC address, and product_uuid for every node. See [here](https://kubernetes.io/docs/setup/independent/install-kubeadm/#verify-the-mac-address-and-product_uuid-are-unique-for-every-node) for more details. -* Certain ports are open on your machines. See [here](/docs/setup/indenpendent/install-kubeadm/#check-required-ports) for more details. +* Certain ports are open on your machines. See [here](/docs/setup/independent/install-kubeadm/#check-required-ports) for more details. * Swap disabled. You **MUST** disable swap in order for the kubelet to work properly. {% endcapture %} From 3bd0c84182469093ac338241f2f4ec7564f75ce5 Mon Sep 17 00:00:00 2001 From: Aravind Date: Fri, 9 Mar 2018 04:37:11 +0530 Subject: [PATCH 104/117] Removed doc line descibing invocation of swagger UI (#7683) --- docs/concepts/overview/kubernetes-api.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/overview/kubernetes-api.md b/docs/concepts/overview/kubernetes-api.md index e01e1905f24ee..8e10dc2bcfa9d 100644 --- a/docs/concepts/overview/kubernetes-api.md +++ b/docs/concepts/overview/kubernetes-api.md @@ -24,7 +24,7 @@ What constitutes a compatible change and how to change the API are detailed by t ## OpenAPI and Swagger definitions -Complete API details are documented using [Swagger v1.2](http://swagger.io/) and [OpenAPI](https://www.openapis.org/). The Kubernetes apiserver (aka "master") exposes an API that can be used to retrieve the Swagger v1.2 Kubernetes API spec located at `/swaggerapi`. You can also enable a UI to browse the API documentation at `/swagger-ui` by passing the `--enable-swagger-ui=true` flag to apiserver. 
+Complete API details are documented using [Swagger v1.2](http://swagger.io/) and [OpenAPI](https://www.openapis.org/). The Kubernetes apiserver (aka "master") exposes an API that can be used to retrieve the Swagger v1.2 Kubernetes API spec located at `/swaggerapi`. Starting with Kubernetes 1.4, OpenAPI spec is also available at [`/swagger.json`](https://git.k8s.io/kubernetes/api/openapi-spec/swagger.json). While we are transitioning from Swagger v1.2 to OpenAPI (aka Swagger v2.0), some of the tools such as kubectl and swagger-ui are still using v1.2 spec. OpenAPI spec is in Beta as of Kubernetes 1.5. From ccf51243f7eafa7fa4e453ecf2d3ebeadc113a56 Mon Sep 17 00:00:00 2001 From: nimbcode <36571783+nimbcode@users.noreply.github.com> Date: Thu, 8 Mar 2018 16:56:09 -0800 Subject: [PATCH 105/117] Adding glossary item for Cloud Providers (#7615) --- _data/glossary/cloud-provider.yaml | 10 ++++++++++ 1 file changed, 10 insertions(+) create mode 100644 _data/glossary/cloud-provider.yaml diff --git a/_data/glossary/cloud-provider.yaml b/_data/glossary/cloud-provider.yaml new file mode 100644 index 0000000000000..82ba2f4c22635 --- /dev/null +++ b/_data/glossary/cloud-provider.yaml @@ -0,0 +1,10 @@ +id: cloud-provider +name: Cloud Provider +full-link: /docs/concepts/cluster-administration/cloud-providers +tags: +- community +short-description: > + Cloud provider is a company that offers cloud computing platform that can run Kubernetes clusters. +long-description: > + Cloud providers or sometime called Cloud Service Provider (CSPs) provides cloud computing platforms. They may offer services such as Infrastructure as a Service (IaaS) or Platform as a Service (PaaS). Cloud providers host the Kubernetes cluster and also provide services that interact with the cluster, such as Load Balancers, Storage Classes etc. + From 131324c34af55e3c1a2d6d2053c2443dc0799d51 Mon Sep 17 00:00:00 2001 From: Kai Chen Date: Thu, 8 Mar 2018 18:37:10 -0800 Subject: [PATCH 106/117] Fix the reference to the glossary in the include guide (#7687) --- docs/home/contribute/includes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/home/contribute/includes.md b/docs/home/contribute/includes.md index 0f6099085fd79..c7da92716286a 100644 --- a/docs/home/contribute/includes.md +++ b/docs/home/contribute/includes.md @@ -62,7 +62,7 @@ changed by setting the for_k8s_version variable. ## Glossary -You can reference glossary terms with an inclusion that will automatically update and replace content with the relevant links from [our glossary](docs/reference/glossary/). When the term is moused-over by someone +You can reference glossary terms with an inclusion that will automatically update and replace content with the relevant links from [our glossary](/docs/reference/glossary/). When the term is moused-over by someone using the online documentation, the glossary entry will display a tooltip. The raw data for glossary terms is stored at [https://github.com/kubernetes/website/tree/master/_data/glossary](https://github.com/kubernetes/website/tree/master/_data/glossary), with a YAML file for each glossary term. From 3472cfd5fc1fdfc8af38897388e40d09e3854583 Mon Sep 17 00:00:00 2001 From: WanLinghao Date: Fri, 9 Mar 2018 13:57:11 +0800 Subject: [PATCH 107/117] fix a desription error in sysctl file. 
(#7666) modified: docs/concepts/cluster-administration/sysctl-cluster.md --- .../cluster-administration/sysctl-cluster.md | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/docs/concepts/cluster-administration/sysctl-cluster.md b/docs/concepts/cluster-administration/sysctl-cluster.md index 6fa32786cb6a7..796c735bb2871 100644 --- a/docs/concepts/cluster-administration/sysctl-cluster.md +++ b/docs/concepts/cluster-administration/sysctl-cluster.md @@ -130,10 +130,13 @@ to schedule those pods onto the right nodes. ## PodSecurityPolicy Annotations -The use of sysctl in pods can be controlled via annotations on the PodSecurityPolicy. +The use of sysctl in pods can be controlled via annotation on the PodSecurityPolicy. -Here is an example, it authorizes binding user creating pod with corresponding -_safe_ and _unsafe_ sysctls. +Sysctl annotation represents a whitelist of allowed safe and unsafe sysctls +in a pod spec. It's a comma-separated list of plain sysctl names or sysctl patterns +(which end in `*`). The string `*` matches all sysctls. + +Here is an example, it authorizes binding user creating pod with corresponding sysctls. ```yaml apiVersion: extensions/v1beta1 @@ -141,8 +144,7 @@ kind: PodSecurityPolicy metadata: name: sysctl-psp annotations: - security.alpha.kubernetes.io/sysctls: 'kernel.shm_rmid_forced' - security.alpha.kubernetes.io/unsafe-sysctls: 'net.ipv4.route.*,kernel.msg*' + security.alpha.kubernetes.io/sysctls: 'net.ipv4.route.*,kernel.msg*' spec: ... ``` From 1800318cec9918239d6c94bfed810ec9a723279b Mon Sep 17 00:00:00 2001 From: AdamDang Date: Mon, 12 Mar 2018 23:43:12 +0800 Subject: [PATCH 108/117] Typo fix "these command"->"these commands" (#7715) "these command"->"these commands" --- docs/tutorials/stateless-application/hello-minikube.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tutorials/stateless-application/hello-minikube.md b/docs/tutorials/stateless-application/hello-minikube.md index f8353290f8a66..24ca4951eeda7 100644 --- a/docs/tutorials/stateless-application/hello-minikube.md +++ b/docs/tutorials/stateless-application/hello-minikube.md @@ -341,7 +341,7 @@ Output: - ingress: disabled ``` -Minikube must be running for these command to take effect. To enable `heapster` addon, for example: +Minikube must be running for these commands to take effect. 
To enable `heapster` addon, for example: ```shell minikube addons enable heapster From 3f9def637029634c8c285d101897a4b813ce8754 Mon Sep 17 00:00:00 2001 From: Chao Wang Date: Mon, 12 Mar 2018 23:48:13 +0800 Subject: [PATCH 109/117] fix typos (#7710) --- docs/update-user-guide-links.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/update-user-guide-links.py b/docs/update-user-guide-links.py index 94915827eff4b..7c449d73efb08 100644 --- a/docs/update-user-guide-links.py +++ b/docs/update-user-guide-links.py @@ -12,7 +12,7 @@ def find_documents_to_rewrite(): rewrites = [] for doc in moved_docs: location = doc_location(doc) - destinations = get_desinations_for_doc(doc) + destinations = get_destinations_for_doc(doc) if len(destinations) == 0: print("Unable to get possible destinations for %s" % doc) @@ -35,7 +35,7 @@ def doc_location(filename): REDIRECT_REGEX = re.compile("^.*\[(.*)\]\((.*)\)$") -def get_desinations_for_doc(filename): +def get_destinations_for_doc(filename): destination_paths = [] with open(filename) as f: lines = [line.rstrip('\n').rstrip('\r') for line in f.readlines()] From ea0a894f4ea36ff86f307003edfb8828f4dfc44a Mon Sep 17 00:00:00 2001 From: Jordan Liggitt Date: Mon, 12 Mar 2018 19:01:13 -0400 Subject: [PATCH 110/117] Fix outdated links (#7716) --- docs/reference/security.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/reference/security.md b/docs/reference/security.md index 65c778712f9ba..76e3919083984 100644 --- a/docs/reference/security.md +++ b/docs/reference/security.md @@ -19,7 +19,7 @@ We’re extremely grateful for security researchers and users that report vulner To make a report, please email the private [kubernetes-security@googlegroups.com](mailto:kubernetes-security@googlegroups.com) list with the security details and the details expected for [all Kubernetes bug reports](https://git.k8s.io/kubernetes/.github/ISSUE_TEMPLATE.md). -You may encrypt your email to this list using the GPG keys of the [Product Security Team members](https://git.k8s.io/community/contributors/devel/security-release-process.md#product-security-team-pst). Encryption using GPG is NOT required to make a disclosure. +You may encrypt your email to this list using the GPG keys of the [Product Security Team members](https://git.k8s.io/sig-release/security-release-process-documentation/security-release-process.md#product-security-team-pst). Encryption using GPG is NOT required to make a disclosure. ### When Should I Report a Vulnerability? @@ -35,7 +35,7 @@ You may encrypt your email to this list using the GPG keys of the [Product Secur ## Security Vulnerability Response -Each report is acknowledged and analyzed by Product Security Team members within 3 working days. This will set off the [Security Release Process](https://git.k8s.io/community/contributors/devel/security-release-process.md#product-security-team-pst). +Each report is acknowledged and analyzed by Product Security Team members within 3 working days. This will set off the [Security Release Process](https://git.k8s.io/sig-release/security-release-process-documentation/security-release-process.md#disclosures). Any vulnerability information shared with Product Security Team stays within Kubernetes project and will not be disseminated to other projects unless it is necessary to get the issue fixed. 
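The security disclosure text in the patch above mentions that reports may optionally be encrypted to a Product Security Team member's GPG key before being mailed to the private list. A minimal sketch of that step is shown below; the key ID `0xDEADBEEFDEADBEEF` and the file name are placeholders, not values taken from the patch:

```shell
# Fetch the (placeholder) public key, then encrypt the report before mailing it
# to kubernetes-security@googlegroups.com. Encryption is optional.
gpg --recv-keys 0xDEADBEEFDEADBEEF
gpg --armor --encrypt --recipient 0xDEADBEEFDEADBEEF vulnerability-report.txt
# Produces vulnerability-report.txt.asc, suitable for attaching to or pasting
# into the report email.
```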
From 3aa1fc491c6945f2c4c91d7bee95ade71e25de8d Mon Sep 17 00:00:00 2001 From: Max <2843450+b-m-f@users.noreply.github.com> Date: Mon, 12 Mar 2018 23:04:10 +0000 Subject: [PATCH 111/117] Update ingress.md (#7707) --- docs/concepts/services-networking/ingress.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/concepts/services-networking/ingress.md b/docs/concepts/services-networking/ingress.md index 8715d3ecfeacc..7716da443bcf4 100644 --- a/docs/concepts/services-networking/ingress.md +++ b/docs/concepts/services-networking/ingress.md @@ -60,7 +60,7 @@ kind: Ingress metadata: name: test-ingress annotations: - ingress.kubernetes.io/rewrite-target: / + nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: From b46bb6de8a69446113c4b614b268d4722a1f3043 Mon Sep 17 00:00:00 2001 From: Chi Trung Nguyen Date: Tue, 13 Mar 2018 17:41:12 +0100 Subject: [PATCH 112/117] Update weave-network-policy.md (#7728) typo --- docs/tasks/administer-cluster/weave-network-policy.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tasks/administer-cluster/weave-network-policy.md b/docs/tasks/administer-cluster/weave-network-policy.md index fa86a6baade9c..c7576a99b7406 100644 --- a/docs/tasks/administer-cluster/weave-network-policy.md +++ b/docs/tasks/administer-cluster/weave-network-policy.md @@ -38,7 +38,7 @@ The output is similar to this: ``` NAME READY STATUS RESTARTS AGE IP NODE -weave-net-1t1qg 2/2 Running 0 9d 192.168.2.10 workndoe3 +weave-net-1t1qg 2/2 Running 0 9d 192.168.2.10 worknode3 weave-net-231d7 2/2 Running 1 7d 10.2.0.17 worknodegpu weave-net-7nmwt 2/2 Running 3 9d 192.168.2.131 masternode weave-net-pmw8w 2/2 Running 0 9d 192.168.2.216 worknode2 From b34babb518b3f992b0ed9a0aab82dc5b95439898 Mon Sep 17 00:00:00 2001 From: Zach Corleissen Date: Tue, 13 Mar 2018 11:21:11 -0700 Subject: [PATCH 113/117] Move @bradtopol from reviewer to approver (#7729) --- OWNERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/OWNERS b/OWNERS index cf57dd39218f5..3919e32b431ab 100644 --- a/OWNERS +++ b/OWNERS @@ -2,12 +2,12 @@ reviewers: - tengqm - zhangxiaoyu-zidif - xiangpengzhao -- bradtopol approvers: - heckj - a-mccarthy - abiogenesis-now - bradamant3 +- bradtopol - steveperry-53 - zacharysarah - chenopis From 2085cf666b55af1c0cdeaf78b927bb9c6798bb96 Mon Sep 17 00:00:00 2001 From: Martin Dietze Date: Wed, 14 Mar 2018 16:30:05 +0100 Subject: [PATCH 114/117] Guide for upgrading kubeadm HA clusters. (#7557) * Guide for upgrading kubeadm HA clusters. * kubeadm HA upgrade guide: text changes from code review. * Guide for upgrading kubeadm HA clusters: proposed changes after second round of code review. 
--- _data/tasks.yml | 1 + .../setup-tools/kubeadm/kubeadm-upgrade.md | 1 + .../independent/create-cluster-kubeadm.md | 1 + .../administer-cluster/kubeadm-upgrade-ha.md | 133 ++++++++++++++++++ 4 files changed, 136 insertions(+) create mode 100644 docs/tasks/administer-cluster/kubeadm-upgrade-ha.md diff --git a/_data/tasks.yml b/_data/tasks.yml index 9fc290e8e1a88..e9210213bc90b 100644 --- a/_data/tasks.yml +++ b/_data/tasks.yml @@ -151,6 +151,7 @@ toc: - docs/tasks/administer-cluster/kubeadm-upgrade-1-7.md - docs/tasks/administer-cluster/kubeadm-upgrade-1-8.md - docs/tasks/administer-cluster/kubeadm-upgrade-1-9.md + - docs/tasks/administer-cluster/kubeadm-upgrade-ha.md - docs/tasks/administer-cluster/namespaces.md - docs/tasks/administer-cluster/namespaces-walkthrough.md - docs/tasks/administer-cluster/dns-horizontal-autoscaling.md diff --git a/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md b/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md index d65b7d5947783..6d115d0fd19e9 100755 --- a/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md +++ b/docs/reference/setup-tools/kubeadm/kubeadm-upgrade.md @@ -23,6 +23,7 @@ Please check these documents out for more detailed how-to-upgrade guidance: * [1.8.x to 1.8.y upgrades](/docs/tasks/administer-cluster/kubeadm-upgrade-1-8/) * [1.8.x to 1.9.x upgrades](/docs/tasks/administer-cluster/kubeadm-upgrade-1-9/) * [1.9.x to 1.9.y upgrades](/docs/tasks/administer-cluster/kubeadm-upgrade-1-9/) + * [1.9.x to 1.9.y HA cluster upgrades](/docs/tasks/administer-cluster/kubeadm-upgrade-ha/) ## kubeadm upgrade plan {#cmd-upgrade-plan} {% include_relative generated/kubeadm_upgrade_plan.md %} diff --git a/docs/setup/independent/create-cluster-kubeadm.md b/docs/setup/independent/create-cluster-kubeadm.md index b8ddcadcff249..658677837292c 100644 --- a/docs/setup/independent/create-cluster-kubeadm.md +++ b/docs/setup/independent/create-cluster-kubeadm.md @@ -485,6 +485,7 @@ Instructions for upgrading kubeadm clusters are available for: * [1.8.x to 1.8.y upgrades](/docs/tasks/administer-cluster/kubeadm-upgrade-1-8/) * [1.8 to 1.9 upgrades/downgrades](/docs/tasks/administer-cluster/kubeadm-upgrade-1-9/) * [1.9.x to 1.9.y upgrades](/docs/tasks/administer-cluster/kubeadm-upgrade-1-9/) + * [1.9.x to 1.9.y HA cluster upgrades](/docs/tasks/administer-cluster/kubeadm-upgrade-ha/) ## Explore other add-ons {#other-addons} diff --git a/docs/tasks/administer-cluster/kubeadm-upgrade-ha.md b/docs/tasks/administer-cluster/kubeadm-upgrade-ha.md new file mode 100644 index 0000000000000..0da21d9bb15a5 --- /dev/null +++ b/docs/tasks/administer-cluster/kubeadm-upgrade-ha.md @@ -0,0 +1,133 @@ +--- +reviewers: +- jamiehannaford +- luxas +- timothysc +- jbeda +title: Upgrading kubeadm HA clusters from 1.9.x to 1.9.y +--- + +{% capture overview %} + +This guide is for upgrading `kubeadm` HA clusters from version 1.9.x to 1.9.y where `y > x`. The term "`kubeadm` HA clusters" refers to clusters of more than one master node created with `kubeadm`. To set up an HA cluster for Kubernetes version 1.9.x `kubeadm` requires additional manual steps. See [Creating HA clusters with kubeadm](/docs/setup/independent/high-availability/) for instructions on how to do this. The upgrade procedure described here targets clusters created following those very instructions. See [Upgrading/downgrading kubeadm clusters between v1.8 to v1.9](/docs/tasks/administer-cluster/kubeadm-upgrade-1-9/) for more instructions on how to create an HA cluster with `kubeadm`. 
+ +{% endcapture %} + +{% capture prerequisites %} + +Before proceeding: + +- You need to have a functional `kubeadm` HA cluster running version 1.9.0 or higher in order to use the process described here. +- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md) carefully. +- Note that `kubeadm upgrade` will not touch any of your workloads, only Kubernetes-internal components. As a best-practice you should back up anything important to you. For example, any application-level state, such as a database and application might depend on (like MySQL or MongoDB) should be backed up beforehand. +- Read [Upgrading/downgrading kubeadm clusters between v1.8 to v1.9](/docs/tasks/administer-cluster/kubeadm-upgrade-1-9/) to learn about the relevant prerequisites. + +{% endcapture %} + +{% capture steps %} + +## Preparation + +Some preparation is needed prior to starting the upgrade. First download the version of `kubeadm` that matches the version of Kubernetes that you are upgrading to: + +```shell +# Use the latest stable release or manually specify a +# released Kubernetes version +export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) +export ARCH=amd64 # or: arm, arm64, ppc64le, s390x +curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /tmp/kubeadm +chmod a+rx /tmp/kubeadm +``` + +Copy this file to `/tmp` on your primary master if necessary. Run this command for checking prerequisites and determining the versions you will receive: + +```shell +/tmp/kubeadm upgrade plan +``` + +If the prerequisites are met you'll get a summary of the software versions kubeadm will upgrade to, like this: + + Upgrade to the latest stable version: + + COMPONENT CURRENT AVAILABLE + API Server v1.9.0 v1.9.2 + Controller Manager v1.9.0 v1.9.2 + Scheduler v1.9.0 v1.9.2 + Kube Proxy v1.9.0 v1.9.2 + Kube DNS 1.14.5 1.14.7 + Etcd 3.2.7 3.1.11 + +**Caution:** Currently the only supported configuration for kubeadm HA clusters requires the use of an externally managed etcd cluster. Upgrading etcd is not supported as a part of the upgrade. If necessary you will have to upgrade the etcd cluster according to [etcd's upgrade instructions](/docs/tasks/administer-cluster/configure-upgrade-etcd/), which is beyond the scope of these instructions. +{: .caution} + +## Upgrading your control plane + +The following procedure must be applied on a single master node and repeated for each subsequent master node sequentially. + +Before initiating the upgrade with `kubeadm` `configmap/kubeadm-config` needs to be modified for the current master host. Replace any hard reference to a master host name with the current master hosts' name: + +```shell +kubectl get configmap -n kube-system kubeadm-config -o yaml >/tmp/kubeadm-config-cm.yaml +sed -i 's/^\([ \t]*nodeName:\).*/\1 /' /tmp/kubeadm-config-cm.yaml +kubectl apply -f /tmp/kubeadm-config-cm.yaml --force +``` + +Now the upgrade process can start. Use the target version determined in the preparation step and run the following command (press “y” when prompted): + +```shell +/tmp/kubeadm upgrade apply v +``` + +If the operation was successful you’ll get a message like this: + + [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.9.2". Enjoy! + +To upgrade the cluster with CoreDNS as the default internal DNS, invoke `kubeadm upgrade apply` with the `--feature-gates=CoreDNS=true` flag. 
+ +Next, manually upgrade your CNI provider + +Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. Check the [addons](/docs/concepts/cluster-administration/addons/) page to find your CNI provider and see if there are additional upgrade steps necessary. + +**Note:** The `kubeadm upgrade apply` step has been known to fail when run initially on the secondary masters (timed out waiting for the restarted static pods to come up). It should succeed if retried after a minute or two. +{: .note} + +## Upgrade base software packages + +At this point all the static pod manifests in your cluster, for example API Server, Controller Manager, Scheduler, Kube Proxy have been upgraded, however the base software, for example `kubelet`, `kubectl`, `kubeadm` installed on your nodes’ OS are still of the old version. For upgrading the base software packages we will upgrade them and restart services on all nodes one by one: + +```shell +# use your distro's package manager, e.g. 'yum' on RH-based systems +# for the versions stick to kubeadm's output (see above) +yum install -y kubelet- kubectl- kubeadm- kubernetes-cni- +systemctl restart kubelet +``` + +In this example an _rpm_-based system is assumed and `yum` is used for installing the upgraded software. On _deb_-based systems it will be `apt-get update` and then `apt-get install =` for all packages. + +Now the new version of the `kubelet` should be running on the host. Verify this using the following command on the respective host: + +```shell +systemctl status kubelet +``` + +Verify that the upgraded node is available again by executing the following from wherever you run `kubectl` commands: + +```shell +kubectl get nodes +``` + +If the `STATUS` column of the above command shows `Ready` for the upgraded host, you can continue (you may have to repeat this for a couple of time before the node gets `Ready`). + +## If something goes wrong + +If the upgrade fails the situation afterwards depends on the phase in which things went wrong: + +1. If `/tmp/kubeadm upgrade apply` failed to upgrade the cluster it will try to perform a rollback. Hence if that happened on the first master, chances are pretty good that the cluster is still intact. + + You can run `/tmp/kubeadm upgrade apply` again as it is idempotent and should eventually make sure the actual state is the desired state you are declaring. You can use `/tmp/kubeadm upgrade apply` to change a running cluster with `x.x.x --> x.x.x` with `--force`, which can be used to recover from a bad state. + +2. If `/tmp/kubeadm upgrade apply` on one of the secondary masters failed you still have a working, upgraded cluster, but with the secondary masters in a somewhat undefined condition. You will have to find out what went wrong and join the secondaries manually. As mentioned above, sometimes upgrading one of the secondary masters fails waiting for the restarted static pods first, but succeeds when the operation is simply repeated after a little pause of one or two minutes. + +{% endcapture %} + +{% include templates/task.md %} From 1549d297835ad3dcfa750dd3f87aa799d0cc4b95 Mon Sep 17 00:00:00 2001 From: Kai Chen Date: Wed, 14 Mar 2018 13:32:03 -0700 Subject: [PATCH 115/117] Point the KubeletConfig doc to the release-1.9 code base (#7696) The version will be updated to v1beta1 with the official 1.10 release. 
--- docs/tasks/administer-cluster/kubelet-config-file.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/tasks/administer-cluster/kubelet-config-file.md b/docs/tasks/administer-cluster/kubelet-config-file.md index 42e85e6b65fc4..424ad3a9718d5 100644 --- a/docs/tasks/administer-cluster/kubelet-config-file.md +++ b/docs/tasks/administer-cluster/kubelet-config-file.md @@ -27,7 +27,7 @@ providing parameters via a config file, which simplifies node deployment. The subset of the Kubelet's configuration that can be configured via a file is defined by the `KubeletConfiguration` struct -[here (v1alpha1)](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/kubeletconfig/v1alpha1/types.go). +[here (v1alpha1)](https://github.com/kubernetes/kubernetes/blob/release-1.9/pkg/kubelet/apis/kubeletconfig/v1alpha1/types.go). The configuration file must be a JSON or YAML representation of the parameters in this struct. Note that this structure, and thus the config file API, is still considered alpha and is not subject to stability guarantees. From f5558e0e57199138ebda183152c3a008d7d34eb4 Mon Sep 17 00:00:00 2001 From: DiamondYuan <541832074@qq.com> Date: Thu, 15 Mar 2018 04:45:05 +0800 Subject: [PATCH 116/117] Fix js error in homepage (#7680) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit When document.querySelector('#docsToc') is null ,get TypeError Cannot read property 'querySelector' of null 。 --- _includes/footer-scripts.html | 1 + 1 file changed, 1 insertion(+) diff --git a/_includes/footer-scripts.html b/_includes/footer-scripts.html index 091545d12a73d..07f1e86bd8787 100644 --- a/_includes/footer-scripts.html +++ b/_includes/footer-scripts.html @@ -32,6 +32,7 @@ function hideNav(toc){ if (!toc) toc = document.querySelector('#docsToc') + if (!toc) return var container = toc.querySelector('.container') // container is built dynamically, so it may not be present on the first runloop From 49a81e1a69a0d7d0b8f98bf7119583b025517792 Mon Sep 17 00:00:00 2001 From: Chao Xu Date: Wed, 14 Mar 2018 18:40:25 -0700 Subject: [PATCH 117/117] Update the doc on admission webhooks (#7733) --- .../admin/extensible-admission-controllers.md | 317 +++++++++--------- 1 file changed, 163 insertions(+), 154 deletions(-) diff --git a/docs/admin/extensible-admission-controllers.md b/docs/admin/extensible-admission-controllers.md index 7e67c2d252c71..afe5d58308a6e 100644 --- a/docs/admin/extensible-admission-controllers.md +++ b/docs/admin/extensible-admission-controllers.md @@ -4,6 +4,7 @@ reviewers: - lavalamp - whitlockjc - caesarxuchao +- deads2k title: Dynamic Admission Control --- @@ -20,11 +21,169 @@ the following: * They need to be compiled into kube-apiserver. * They are only configurable when the apiserver starts up. -1.7 introduces two alpha features, *Initializers* and *External Admission -Webhooks*, that address these limitations. These features allow admission -controllers to be developed out-of-tree and configured at runtime. +Two features, *Admission Webhooks* (beta in 1.9) and *Initializers* (alpha), +address these limitations. They allow admission controllers to be developed +out-of-tree and configured at runtime. -This page describes how to use Initializers and External Admission Webhooks. +This page describes how to use Admission Webhooks and Initializers. + +## Admission Webhooks + +### What are admission webhooks? 
+ +Admission webhooks are HTTP callbacks that receive admission requests and do +something with them. You can define two types of admission webhooks, +[ValidatingAdmissionWebhooks](/docs/admin/admission-controllers.md#validatingadmissionwebhook-alpha-in-18-beta-in-19) +and +[MutatingAdmissionWebhooks](/docs/admin/admission-controllers.md#mutatingadmissionwebhook-beta-in-19). +With `ValidatingAdmissionWebhooks`, you may reject requests to enforce custom +admission policies. With `MutatingAdmissionWebhooks`, you may change requests to +enforce custom defaults. + +### Experimenting with admission webhooks + +Admission webhooks are essentially part of the cluster control-plane. You should +write and deploy them with great caution. Please read the [user +guides](https://github.com/kubernetes/website/pull/6836/files)(WIP) for +instructions if you intend to write/deploy production-grade admission webhooks. +In the following, we describe how to quickly experiment with admission webhooks. + +### Prerequisites + +* Ensure that the Kubernetes cluster is at least as new as v1.9. + +* Ensure that MutatingAdmissionWebhook and ValidatingAdmissionWebhook + admission controllers are enabled. + [Here](/docs/admin/admission-controllers.md#is-there-a-recommended-set-of-admission-controllers-to-use) + is a recommended set of admission controllers to enable in general. + +* Ensure that the admissionregistration.k8s.io/v1beta1 API is enabled. + +### Write an admission webhook server + +Please refer to the implementation of the [admission webhook +server](https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.1/test/images/webhook/main.go) +that is validated in a Kubernetes e2e test. The webhook handles the +`admissionReview` requests sent by the apiservers, and sends back its decision +wrapped in `admissionResponse`. + +The example admission webhook server leaves the `ClientAuth` field +[empty](https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.1/test/images/webhook/config.go#L48-L49), +which defaults to `NoClientCert`. This means that the webhook server does not +authenticate the identity of the clients, supposedly apiservers. If you need +mutual TLS or other ways to authenticate the clients, see +how to [authenticate apiservers](#authenticate-apiservers). + +### Deploy the admission webhook service + +The webhook server in the e2e test is deployed in the Kubernetes cluster, via +the [deployment API](/docs/api-reference/{{page.version}}/#deployment-v1beta1-apps). +The test also creates a [service](/docs/api-reference/{{page.version}}/#service-v1-core) +as the front-end of the webhook server. See +[code](https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.1/test/e2e/apimachinery/webhook.go#L196). + +You may also deploy your webhooks outside of the cluster. You will need to update +your [webhook client configurations](https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.1/staging/src/k8s.io/api/admissionregistration/v1beta1/types.go#L218) accordingly. + +### Configure admission webhooks on the fly + +You can dynamically configure what resources are subject to what admission +webhooks via +[ValidatingWebhookConfiguration](https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.1/staging/src/k8s.io/api/admissionregistration/v1beta1/types.go#L68) +or +[MutatingWebhookConifuration](https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.1/staging/src/k8s.io/api/admissionregistration/v1beta1/types.go#L98). 
+ +The following is an example `validatingWebhookConfiguration`, a mutating webhook +configuration is similar. + +```yaml +apiVersion: admissionregistration.k8s.io/v1beta1 +kind: ValidatingWebhookConfiguration +metadata: + name: +webhooks: +- name: + rules: + - apiGroups: + - "" + apiVersions: + - v1 + operations: + - CREATE + resources: + - pods + clientConfig: + service: + namespace: + name: + caBundle: +``` + +When an apiserver receives a request that matches one of the `rules`, the +apiserver sends an `admissionReview` request to webhook as specified in the +`clientConfig`. + +After you create the webhook configuration, the system will take a few seconds +to honor the new configuration. + +### Authenticate apiservers + +If your admission webhooks require authentication, you can configure the +apiservers to use basic auth, bearer token, or a cert to authenticate itself to +the webhooks. There are three steps to complete the configuration. + +* When starting the apiserver, specify the location of the admission control + configuration file via the `--admission-control-config-file` flag. + +* In the admission control configuration file, specify where the + MutatingAdmissionWebhook controller and ValidatingAdmissionWebhook controller + should read the credentials. The credentials are stored in kubeConfig files + (yes, the same schema that's used by kubectl), so the field name is + `kubeConfigFile`. Here is an example admission control configuration file: + +```yaml +apiVersion: apiserver.k8s.io/v1alpha1 +kind: AdmissionConfiguration +plugins: +- name: ValidatingAdmissionWebhook + configuration: + apiVersion: apiserver.config.k8s.io/v1alpha1 + kind: WebhookAdmission + kubeConfigFile: +- name: MutatingAdmissionWebhook + configuration: + apiVersion: apiserver.config.k8s.io/v1alpha1 + kind: WebhookAdmission + kubeConfigFile: +``` + +The schema of `admissionConfiguration` is defined +[here](https://github.com/kubernetes/kubernetes/blob/v1.10.0-beta.0/staging/src/k8s.io/apiserver/pkg/apis/apiserver/v1alpha1/types.go#L27). + +* In the kubeConfig file, provide the credentials: + +```yaml +apiVersion: v1 +kind: Config +users: +# DNS name of webhook service, i.e., ..svc, or the URL +# of the webhook server. +- name: 'webhook1.ns1.svc' + user: + client-certificate-data: + client-key-data: +# The `name` supports using * to wildmatch prefixing segments. +- name: '*.webhook-company.org' + user: + password: + username: +# '*' is the default match. +- name: '*' + user: + token: +``` + +Of course you need to set up the webhook server to handle these authentications. ## Initializers @@ -135,153 +294,3 @@ the pods will be stuck in an uninitialized state. Make sure that all expansions of the `` tuple in a `rule` are valid. If they are not, separate them in different `rules`. - -## External Admission Webhooks - -### What are external admission webhooks? - -External admission webhooks are HTTP callbacks that are intended to receive -admission requests and do something with them. What an external admission -webhook does is up to you, but there is an -[interface](https://github.com/kubernetes/kubernetes/blob/v1.7.0-rc.1/pkg/apis/admission/v1alpha1/types.go) -that it must adhere to so that it responds with whether or not the -admission request should be allowed. - -Unlike initializers or the plugin-style admission controllers, external -admission webhooks are not allowed to mutate the admission request in any way. 
- -Because admission is a high security operation, the external admission webhooks -must support TLS. - -### When to use admission webhooks? - -A simple example use case for an external admission webhook is to do semantic validation -of Kubernetes resources. Suppose that your infrastructure requires that all `Pod` -resources have a common set of labels, and you do not want any `Pod` to be -persisted to Kubernetes if those needs are not met. You could write your -external admission webhook to do this validation and respond accordingly. - -### How are external admission webhooks triggered? - -Whenever a request comes in, the `GenericAdmissionWebhook` admission plugin will -get the list of interested external admission webhooks from -`externalAdmissionHookConfiguration` objects (explained below) and call them in -parallel. If **all** of the external admission webhooks approve the admission -request, the admission chain continues. If **any** of the external admission -webhooks deny the admission request, the admission request will be denied, and -the reason for doing so will be based on the _first_ external admission webhook -denial reason. _This means if there is more than one external admission webhook -that denied the admission request, only the first will be returned to the -user._ If there is an error encountered when calling an external admission -webhook, that request is ignored and will not be used to approve/deny the -admission request. - -**Note:** The admission chain depends solely on the order of the -`--admission-control` option passed to `kube-apiserver`. - -### Enable external admission webhooks - -*External Admission Webhooks* is an alpha feature, so it is disabled by default. -To turn it on, you need to - -* Include "GenericAdmissionWebhook" in the `--admission-control` flag when - starting the apiserver. If you have multiple `kube-apiserver` replicas, all - should have the same flag setting. - -* Enable the dynamic admission controller registration API by adding - `admissionregistration.k8s.io/v1alpha1` to the `--runtime-config` flag passed - to `kube-apiserver`, e.g. - `--runtime-config=admissionregistration.k8s.io/v1alpha1`. Again, all replicas - should have the same flag setting. - -### Write a webhook admission controller - -See [caesarxuchao/example-webhook-admission-controller](https://github.com/caesarxuchao/example-webhook-admission-controller) -for an example webhook admission controller. - -The communication between the webhook admission controller and the apiserver, or -more precisely, the GenericAdmissionWebhook admission controller, needs to be -TLS secured. You need to generate a CA cert and use it to sign the server cert -used by your webhook admission controller. The pem formatted CA cert is supplied -to the apiserver via the dynamic registration API -`externaladmissionhookconfigurations.clientConfig.caBundle`. - -For each request received by the apiserver, the GenericAdmissionWebhook -admission controller sends an -[admissionReview](https://github.com/kubernetes/kubernetes/blob/v1.7.0-rc.1/pkg/apis/admission/v1alpha1/types.go#L27) -to the relevant webhook admission controller. The webhook admission controller -gathers information like `object`, `oldobject`, and `userInfo`, from -`admissionReview.spec`, sends back a response with the body also being the -`admissionReview`, whose `status` field is filled with the admission decision. 
- -### Deploy the webhook admission controller - -See [caesarxuchao/example-webhook-admission-controller deployment](https://github.com/caesarxuchao/example-webhook-admission-controller/tree/master/deployment) -for an example deployment. - -The webhook admission controller should be deployed via the -[deployment API](/docs/api-reference/{{page.version}}/#deployment-v1beta1-apps). -You also need to create a -[service](/docs/api-reference/{{page.version}}/#service-v1-core) as the -front-end of the deployment. - -### Configure webhook admission controller on the fly - -You can configure what webhook admission controllers are enabled and what -resources are subject to the admission controller via creating -externaladmissionhookconfigurations. - -We suggest that you first deploy the webhook admission controller and make sure -it is working properly before creating the externaladmissionhookconfigurations. -Otherwise, depending whether the webhook is configured as fail open or fail -closed, operations will be unconditionally accepted or rejected. - -The following is an example `externaladmissionhookconfiguration`: - -```yaml -apiVersion: admissionregistration.k8s.io/v1alpha1 -kind: ExternalAdmissionHookConfiguration -metadata: - name: example-config -externalAdmissionHooks: -- name: pod-image.k8s.io - rules: - - apiGroups: - - "" - apiVersions: - - v1 - operations: - - CREATE - resources: - - pods - failurePolicy: Ignore - clientConfig: - caBundle: - service: - name: - namespace: -``` - -For a request received by the apiserver, if the request matches any of the -`rules` of an `externalAdmissionHook`, the `GenericAdmissionWebhook` admission -controller will send an `admissionReview` request to the `externalAdmissionHook` -to ask for admission decision. - -The `rule` is similar to the `rule` in `initializerConfiguration`, with two -differences: - -* The addition of the `operations` field, specifying what operations the webhook - is interested in; - -* The `resources` field accepts subresources in the form or resource/subresource. - -Make sure that all expansions of the `` tuple -in a `rule` are valid. If they are not, separate them to different `rules`. - -You can also specify the `failurePolicy`. In 1.7, the system supports `Ignore` -and `Fail` policies, meaning that upon a communication error with the webhook -admission controller, the `GenericAdmissionWebhook` can admit or reject the -operation based on the configured policy. - -After you create the `externalAdmissionHookConfiguration`, the system will take a few -seconds to honor the new configuration.
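The final patch above replaces the alpha `externalAdmissionHookConfiguration` flow with the beta `ValidatingWebhookConfiguration`/`MutatingWebhookConfiguration` registration API. As a quick sanity check of that registration flow, the sketch below applies a configuration and verifies that the apiserver has picked it up; the file name `validating-webhook.yaml` and the configuration name `my-webhook-config` are placeholders, not values from the patches:

```shell
# Assumes the admissionregistration.k8s.io/v1beta1 API is enabled and a webhook
# service is already deployed; validating-webhook.yaml is a placeholder file
# containing a ValidatingWebhookConfiguration like the example in the patch.
kubectl apply -f validating-webhook.yaml

# List registered configurations and inspect the one just created.
kubectl get validatingwebhookconfigurations
kubectl get validatingwebhookconfiguration my-webhook-config -o yaml

# The apiserver takes a few seconds to honor a new configuration; afterwards,
# exercise it with a request matching the configured rules (e.g. creating a pod).
```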