diff --git a/content/en/blog/_posts/2018-06-26-kubernetes-1.11-release-announcement.md b/content/en/blog/_posts/2018-06-26-kubernetes-1.11-release-announcement.md
index a4a964e43..d28a628c3 100644
--- a/content/en/blog/_posts/2018-06-26-kubernetes-1.11-release-announcement.md
+++ b/content/en/blog/_posts/2018-06-26-kubernetes-1.11-release-announcement.md
@@ -60,7 +60,7 @@ If you’re interested in exploring these features more in depth, check back in
* Day 1: [IPVS-Based In-Cluster Service Load Balancing Graduates to General Availability](/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/)
* Day 2: [CoreDNS Promoted to General Availability](/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/)
* Day 3: [Dynamic Kubelet Configuration Moves to Beta](/blog/2018/07/11/dynamic-kubelet-configuration/)
-* Day 4: [Resizing Persistent Volumes using Kubernetes](/blog/2018/07/11/resizing-persistent-volumes-using-kubernetes/)
+* Day 4: [Resizing Persistent Volumes using Kubernetes](/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/)
## Release team
diff --git a/content/en/blog/_posts/2018-08-02-dynamically-expand-volume-csi.md b/content/en/blog/_posts/2018-08-02-dynamically-expand-volume-csi.md
new file mode 100644
index 000000000..5a071d974
--- /dev/null
+++ b/content/en/blog/_posts/2018-08-02-dynamically-expand-volume-csi.md
@@ -0,0 +1,161 @@
+---
+layout: blog
+title: 'Dynamically Expand Volume with CSI and Kubernetes'
+date: 2018-08-02
+---
+
+**Author**: Orain Xiong (Co-Founder, WoquTech)
+
+_Kubernetes itself has a very powerful storage subsystem, covering a fairly broad spectrum of use cases. However, when planning to build a production-grade relational database platform with Kubernetes, we faced a big challenge: storage. This article describes how to extend the latest Container Storage Interface 0.2.0, integrate it with Kubernetes, and demonstrates the essential piece: dynamically expanding volume capacity._
+
+## Introduction
+
+Among our customers, especially in the financial space, there is a huge upswell in the adoption of container orchestration technology.
+
+They are looking to open source solutions to redesign existing monolithic applications, which have been running for several years on virtualization infrastructure or bare metal.
+
+Considering extensibility and technical maturity, Kubernetes and Docker are at the very top of the list. But migrating monolithic applications to a distributed orchestration platform like Kubernetes is challenging, and the relational database is critical to that migration.
+
+With respect to the relational database, we should pay close attention to storage. Kubernetes itself has a very powerful storage subsystem that covers a fairly broad spectrum of use cases. Yet when planning to run a relational database with Kubernetes in production, we face a big challenge: some fundamental functionality is still unimplemented, specifically, dynamically expanding volumes. It sounds mundane compared with actions like create, delete, mount, and unmount, but it is absolutely required.
+
+Currently, volume expansion is only available with the following storage provisioners:
+
+* gcePersistentDisk
+* awsElasticBlockStore
+* OpenStack Cinder
+* glusterfs
+* rbd
+
+To enable this feature, we need to set the `ExpandPersistentVolumes` feature gate to true and turn on the `PersistentVolumeClaimResize` admission plugin. Once `PersistentVolumeClaimResize` has been enabled, resizing is allowed for any PersistentVolumeClaim whose StorageClass has its `allowVolumeExpansion` field set to true.
+
+Unfortunately, dynamically expanding volumes through the Container Storage Interface (CSI) and Kubernetes is not available, even when the underlying storage providers have this feature.
+
+This article gives a simplified view of CSI, followed by a walkthrough of how to introduce a new volume expansion feature on top of the existing CSI and Kubernetes. Finally, it demonstrates how to dynamically expand volume capacity.
+
+## Container Storage Interface (CSI)
+
+To have a better understanding of what we're going to do, the first thing we need to know is what the Container Storage Interface is. Currently, the existing storage subsystem within Kubernetes still has some problems. Storage driver code is maintained in the Kubernetes core repository, where it is difficult to test. Beyond that, Kubernetes has to grant storage vendors permission to check code into the core repository. Ideally, storage drivers should be implemented externally.
+
+CSI is designed to define an industry standard so that storage providers that implement CSI become available across all container orchestration systems that support CSI.
+
+This diagram depicts, at a high level, how Kubernetes integrates with CSI:
+
+![csi diagram](/images/blog/2018-08-02-dynamically-expand-volume-csi/csi-diagram.png)
+
+* Three new external components are introduced to decouple Kubernetes and Storage Provider logic
+* Blue arrows show the conventional way of calling the API Server
+* Red arrows show gRPC calls to the Volume Driver
+
+For more details, please visit: https://github.com/container-storage-interface/spec/blob/master/spec.md
+
+## Extend CSI and Kubernetes
+
+To enable volume expansion on top of Kubernetes, we need to extend several components, including the CSI specification, the “in-tree” volume plugin, the external-provisioner, and the external-attacher.
+
+### Extend CSI spec
+
+Volume expansion is still undefined in the latest CSI 0.2.0 specification. Three new RPCs, `RequiresFSResize`, `ControllerResizeVolume`, and `NodeResizeVolume`, need to be introduced.
+
+```
+service Controller {
+  rpc CreateVolume (CreateVolumeRequest)
+    returns (CreateVolumeResponse) {}
+  ……
+  rpc RequiresFSResize (RequiresFSResizeRequest)
+    returns (RequiresFSResizeResponse) {}
+  rpc ControllerResizeVolume (ControllerResizeVolumeRequest)
+    returns (ControllerResizeVolumeResponse) {}
+}
+
+service Node {
+  rpc NodeStageVolume (NodeStageVolumeRequest)
+    returns (NodeStageVolumeResponse) {}
+  ……
+  rpc NodeResizeVolume (NodeResizeVolumeRequest)
+    returns (NodeResizeVolumeResponse) {}
+}
+```
+
+### Extend “In-Tree” Volume Plugin
+
+In addition to extending the CSI specification, the `csiPlugin` interface within Kubernetes should also implement `expandablePlugin`. The `csiPlugin` interface will expand a `PersistentVolumeClaim` on behalf of the `ExpanderController`.
+
+
+```go
+type ExpandableVolumePlugin interface {
+	VolumePlugin
+	ExpandVolumeDevice(spec Spec, newSize resource.Quantity, oldSize resource.Quantity) (resource.Quantity, error)
+	RequiresFSResize() bool
+}
+```
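+
+As a concrete illustration, here is a hypothetical, simplified sketch of how a CSI volume plugin could satisfy this interface by delegating to the driver’s `ControllerResizeVolume` RPC introduced above. The `controllerResizeClient` abstraction and the use of a plain volume ID instead of the in-tree `Spec` type are simplifications for this article, not the actual Kubernetes implementation.
+
+```go
+package csi
+
+import (
+	"context"
+	"time"
+
+	"k8s.io/apimachinery/pkg/api/resource"
+)
+
+// controllerResizeClient abstracts the gRPC stub generated from the extended
+// CSI spec shown earlier (an assumption made for this sketch).
+type controllerResizeClient interface {
+	ControllerResizeVolume(ctx context.Context, volumeID string, newBytes int64) (int64, error)
+	RequiresFSResize(ctx context.Context) (bool, error)
+}
+
+type csiPlugin struct {
+	client controllerResizeClient
+}
+
+// ExpandVolumeDevice asks the storage backend to grow the volume and returns
+// the capacity that was actually allocated.
+func (p *csiPlugin) ExpandVolumeDevice(volumeID string, newSize, oldSize resource.Quantity) (resource.Quantity, error) {
+	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+	defer cancel()
+
+	allocated, err := p.client.ControllerResizeVolume(ctx, volumeID, newSize.Value())
+	if err != nil {
+		return oldSize, err
+	}
+	return *resource.NewQuantity(allocated, resource.BinarySI), nil
+}
+
+// RequiresFSResize reports whether a file system resize on the node must
+// follow the controller-side expansion.
+func (p *csiPlugin) RequiresFSResize() bool {
+	ok, err := p.client.RequiresFSResize(context.Background())
+	return err == nil && ok
+}
+```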
+
+### Implement Volume Driver
+
+Finally, to abstract away the complexity of the implementation, each storage provider implements its own management logic behind the following functions, which are well defined in the (extended) CSI specification:
+
+* CreateVolume
+* DeleteVolume
+* ControllerPublishVolume
+* ControllerUnpublishVolume
+* ValidateVolumeCapabilities
+* ListVolumes
+* GetCapacity
+* ControllerGetCapabilities
+* RequiresFSResize
+* ControllerResizeVolume
+
+## Demonstration
+
+Let’s demonstrate this feature with a concrete use case.
+
+* Create a storage class for the CSI storage provisioner
+
+```yaml
+allowVolumeExpansion: true
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: csi-qcfs
+parameters:
+ csiProvisionerSecretName: orain-test
+ csiProvisionerSecretNamespace: default
+provisioner: csi-qcfsplugin
+reclaimPolicy: Delete
+volumeBindingMode: Immediate
+```
+
+* Deploy the CSI volume driver, including the storage provisioner `csi-qcfsplugin`, across the Kubernetes cluster
+
+* Create the PVC `qcfs-pvc`, which will be dynamically provisioned by the storage class `csi-qcfs`
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: qcfs-pvc
+ namespace: default
+....
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 300Gi
+ storageClassName: csi-qcfs
+```
+
+* Create a MySQL 5.7 instance that uses the PVC `qcfs-pvc`
+* To mirror a realistic production-level scenario, run two different types of workloads:
+  * Batch inserts, which make MySQL consume more file system capacity
+  * Surges of query requests
+* Dynamically expand the volume capacity by editing the `qcfs-pvc` configuration (see the example below)
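+
+For example, assuming the claim was created with a 300Gi request as above, bumping `spec.resources.requests.storage` (e.g. via `kubectl edit pvc qcfs-pvc`) is what triggers the expansion. The values below are illustrative:
+
+```yaml
+# qcfs-pvc after editing: only the storage request changes
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 400Gi   # was 300Gi; a later edit to 500Gi repeats the process
+  storageClassName: csi-qcfs
+```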
+
+The Prometheus and Grafana integration allows us to visualize corresponding critical metrics.
+
+![prometheus grafana](/images/blog/2018-08-02-dynamically-expand-volume-csi/prometheus-grafana.png)
+
+We notice that the middle panel shows the MySQL data file size increasing slowly during the bulk inserts. At the same time, the bottom panel shows the file system being expanded twice in about 20 minutes, from 300 GiB to 400 GiB and then to 500 GiB. Meanwhile, the upper panel shows that each volume expansion completes almost immediately and hardly impacts MySQL QPS.
+
+## Conclusion
+
+Regardless of the infrastructure applications run on, the database is always a critical resource. It is essential to have a more advanced storage subsystem that fully supports database requirements. This will help drive broader adoption of cloud native technology.
diff --git a/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md b/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md
new file mode 100644
index 000000000..3d9919543
--- /dev/null
+++ b/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md
@@ -0,0 +1,193 @@
+---
+layout: blog
+title: 'Out of the Clouds onto the Ground: How to Make Kubernetes Production Grade Anywhere'
+date: 2018-08-03
+---
+
+**Authors**: Steven Wong (VMware), Michael Gasch (VMware)
+
+This blog offers some guidelines for running a production grade Kubernetes cluster in an environment like an on-premise data center or edge location.
+
+What does it mean to be “production grade”?
+
+* The installation is secure
+* The deployment is managed with a repeatable and recorded process
+* Performance is predictable and consistent
+* Updates and configuration changes can be safely applied
+* Logging and monitoring are in place to detect and diagnose failures and resource shortages
+* Service is “highly available enough” considering available resources, including constraints on money, physical space, power, etc.
+* A recovery process is available, documented, and tested for use in the event of failures
+
+In short, production grade means anticipating accidents and preparing for recovery with minimal pain and delay.
+
+This article is directed at on-premise Kubernetes deployments on a hypervisor or bare-metal platform, where backing resources are finite compared with the elasticity of the major public clouds. However, some of these recommendations may also be useful in a public cloud if budget constraints limit the resources you choose to consume.
+
+A single node bare-metal Minikube deployment may be cheap and easy, but is not production grade. Conversely, you’re not likely to achieve Google’s Borg experience in a retail store, branch office, or edge location, nor are you likely to need it.
+
+This blog offers some guidance on achieving a production worthy Kubernetes deployment, even when dealing with some resource constraints.
+
+![without incidence](/images/blog/2018-08-03-make-kubernetes-production-grade-anywhere/without-incidence.png)
+
+## Critical components in a Kubernetes cluster
+
+Before we dive into the details, it is critical to understand the overall Kubernetes architecture.
+
+A Kubernetes cluster is a highly distributed system based on a control plane and clustered worker node architecture as depicted below.
+
+![api server](/images/blog/2018-08-03-make-kubernetes-production-grade-anywhere/api-server.png)
+
+Typically the API server, Controller Manager and Scheduler components are co-located within multiple instances of control plane (aka Master) nodes. Master nodes usually include etcd too, although there are high availability and large cluster scenarios that call for running etcd on independent hosts. The components can be run as containers, and optionally be supervised by Kubernetes, i.e. running as static pods.
+
+For high availability, redundant instances of these components are used. The importance and required degree of redundancy varies.
+
+### Kubernetes components from an HA perspective
+
+![kubernetes components HA](/images/blog/2018-08-03-make-kubernetes-production-grade-anywhere/kubernetes-components-ha.png)
+
+Risks to these components include hardware failures, software bugs, bad updates, human errors, network outages, and overloaded systems resulting in resource exhaustion. Redundancy can mitigate the impact of many of these hazards. In addition, the resource scheduling and high availability features of a hypervisor platform can be useful to surpass what can be achieved using the Linux operating system, Kubernetes, and a container runtime alone.
+
+The API Server uses multiple instances behind a load balancer to achieve scale and availability. The load balancer is a critical component for purposes of high availability. Multiple DNS API Server ‘A’ records might be an alternative if you don’t have a load balancer.
+
+The kube-scheduler and kube-controller-manager engage in a leader election process, rather than utilizing a load balancer. Since a [cloud-controller-manager](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/) is used for selected types of hosting infrastructure, and these have implementation variations, they will not be discussed, beyond indicating that they are a control plane component.
+
+Pods running on Kubernetes worker nodes are managed by the kubelet agent. Each worker instance runs the kubelet agent and a [CRI-compatible](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) container runtime. Kubernetes itself is designed to monitor and recover from worker node outages. But for critical workloads, hypervisor resource management, workload isolation and availability features can be used to enhance availability and make performance more predictable.
+
+## etcd
+
+etcd is the persistent store for all Kubernetes objects. The availability and recoverability of the etcd cluster should be the first consideration in a production-grade Kubernetes deployment.
+
+A five-node etcd cluster is a best practice if you can afford it. Why? Because you could engage in maintenance on one and still tolerate a failure. A three-node cluster is the minimum [recommendation](https://coreos.com/etcd/docs/latest/v2/admin_guide.html#optimal-cluster-size) for production-grade service, even if only a single hypervisor host is available. More than seven nodes is not recommended except for [very large installations](https://monzo.com/blog/2017/11/29/very-robust-etcd/) straddling multiple availability zones.
+
+The minimum recommendation for hosting an etcd cluster node is 2GB of RAM with 8GB of SSD-backed disk. Usually, 8GB RAM and a 20GB disk will be enough. Disk performance affects failed node recovery time. See https://coreos.com/etcd/docs/latest/op-guide/hardware.html for more on this.
+
+### Consider multiple etcd clusters in special situations
+
+For very large Kubernetes clusters, consider using a separate etcd cluster for Kubernetes events so that event storms do not impact the main Kubernetes API service. If you use flannel networking, it retains configuration in etcd and may have differing version requirements than Kubernetes, which can complicate etcd backup -- consider using a dedicated etcd cluster for flannel.
+
+## Single host deployments
+
+The availability risk list includes hardware, software and people. If you are limited to a single host, the use of redundant storage, error-correcting memory and dual power supplies can reduce hardware failure exposure. Running a hypervisor on the physical host will allow operation of redundant software components and add operational advantages related to deployment, upgrade, and resource consumption governance, with predictable and repeatable performance under stress. For example, even if you can only afford to run singletons of the master services, they need to be protected from overload and resource exhaustion while competing with your application workload. A hypervisor can be more effective and easier to manage than configuring Linux scheduler priorities, cgroups, Kubernetes flags, etc.
+
+If resources on the host permit, you can deploy three etcd VMs. Each of the etcd VMs should be backed by a different physical storage device, or they should use separate partitions of a backing store using redundancy (mirroring, RAID, etc).
+
+Dual redundant instances of the API server, scheduler and controller manager would be the next upgrade, if your single host has the resources.
+
+### Single host deployment options, least production worthy to better
+
+![single host deployment](/images/blog/2018-08-03-make-kubernetes-production-grade-anywhere/single-host-deployment.png)
+
+## Dual host deployments
+
+With two hosts, the storage concerns for etcd are the same as for a single host: you want redundancy. And you would preferably run 3 etcd instances. Although possibly counter-intuitive, it is better to concentrate all etcd nodes on a single host. You do not gain reliability by doing a 2+1 split across two hosts - because loss of the node holding the majority of etcd instances results in an outage, whether that majority is 2 or 3. If the hosts are not identical, put the whole etcd cluster on the most reliable host.
+
+Running redundant API Servers, kube-schedulers, and kube-controller-managers is recommended. These should be split across hosts to minimize risk due to container runtime, OS and hardware failures.
+
+Running a hypervisor layer on the physical hosts will allow operation of redundant software components with resource consumption governance, and can have planned maintenance operational advantages.
+
+### Dual host deployment options, least production worthy to better
+
+![dual host deployment](/images/blog/2018-08-03-make-kubernetes-production-grade-anywhere/dual-host-deployment.png)
+
+## Triple (or larger) host deployments -- Moving into uncompromised production-grade service
+
+Splitting etcd across three hosts is recommended. A single hardware failure will reduce application workload capacity, but should not result in a complete service outage.
+
+With very large clusters, more etcd instances will be required.
+
+Running a hypervisor layer offers operational advantages and better workload isolation. It is beyond the scope of this article, but at the three-or-more host level, advanced features may be available (clustered redundant shared storage, resource governance with dynamic load balancing, automated health monitoring with live migration or failover).
+
+### Triple (or more) host options, least production worthy to better
+
+![triple host deployment](/images/blog/2018-08-03-make-kubernetes-production-grade-anywhere/triple-host-deployment.png)
+
+## Kubernetes configuration settings
+Master and Worker nodes should be protected from overload and resource exhaustion. Hypervisor features can be used to isolate critical components and reserve resources. There are also Kubernetes configuration settings that can throttle things like API call rates and pods per node. Some install suites and commercial distributions take care of this, but if you are performing a custom Kubernetes deployment, you may find that the defaults are not appropriate, particularly if your resources are small or your cluster is large.
+
+Resource consumption by the control plane will correlate with the number of pods and the pod churn rate. Very large and very small clusters will benefit from non-default [settings](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/) of kube-apiserver request throttling and memory. Having these too high can lead to request limit exceeded and out of memory errors.
+
+On worker nodes, [Node Allocatable](https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/) should be configured based on a reasonable supportable workload density at each node. Namespaces can be created to subdivide the worker node cluster into multiple virtual clusters with resource CPU and memory [quotas](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/). Kubelet handling of [out of resource](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/) conditions can be configured.
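+
+As an illustration of the quota mechanism mentioned above (the namespace, name and values are placeholders, not a sizing recommendation), a per-namespace ResourceQuota caps the aggregate CPU and memory that one tenant’s workloads can request:
+
+```yaml
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: team-a-quota      # hypothetical quota for the "team-a" namespace
+  namespace: team-a
+spec:
+  hard:
+    requests.cpu: "8"
+    requests.memory: 16Gi
+    limits.cpu: "16"
+    limits.memory: 32Gi
+    pods: "40"
+```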
+
+## Security
+
+Every Kubernetes cluster has a cluster root Certificate Authority (CA). The Controller Manager, API Server, Scheduler, kubelet client, kube-proxy and administrator certificates need to be generated and installed. If you use an install tool or a distribution this may be handled for you. A manual process is described [here](https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md). You should be prepared to reinstall certificates in the event of node replacements or expansions.
+
+As Kubernetes is entirely API driven, controlling and limiting who can access the cluster and what actions they are allowed to perform is essential. Encryption and authentication options are addressed in this [documentation](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/).
+
+Kubernetes application workloads are based on container images. You want the source and content of these images to be trustworthy. This will almost always mean that you will host a local container image repository. Pulling images from the public Internet can present both reliability and security issues. You should choose a repository that supports image signing, security scanning, access controls on pushing and pulling images, and logging of activity.
+
+Processes must be in place to support applying updates for host firmware, hypervisor, OS, Kubernetes, and other dependencies. Version monitoring should be in place to support audits.
+
+Recommendations:
+
+* Tighten security settings on the control plane components beyond defaults (e.g., [locking down worker nodes](http://blog.kontena.io/locking-down-kubernetes-workers/))
+* Utilize [Pod Security Policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/)
+* Consider the [NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) integration available with your networking solution, including how you will accomplish tracing, monitoring and troubleshooting.
+* Use RBAC to drive authorization decisions and enforcement.
+* Consider physical security, especially when deploying to edge or remote office locations that may be unattended. Include storage encryption to limit exposure from stolen devices and protection from attachment of malicious devices like USB keys.
+* Protect Kubernetes plain-text cloud provider credentials (access keys, tokens, passwords, etc.)
+
+Kubernetes [secret](https://kubernetes.io/docs/concepts/configuration/secret/) objects are appropriate for holding small amounts of sensitive data. They are retained within etcd and can readily hold credentials used with the Kubernetes API, but there are times when a workload or an extension of the cluster itself needs a more full-featured solution. The HashiCorp Vault project is a popular solution if you need more than the built-in secret objects can provide.
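+
+A minimal Secret manifest looks like the sketch below (the name and values are placeholders); the API server stores the `stringData` values base64-encoded under `data` in etcd:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: db-credentials    # hypothetical secret consumed by a workload
+type: Opaque
+stringData:               # write-only convenience field; stored under data as base64
+  username: admin
+  password: replace-me
+```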
+
+## Disaster Recovery and Backup
+
+![disaster recovery](/images/blog/2018-08-03-make-kubernetes-production-grade-anywhere/disaster-recovery.png)
+
+Utilizing redundancy through the use of multiple hosts and VMs helps reduce some classes of outages, but scenarios such as a sitewide natural disaster, a bad update, getting hacked, software bugs, or human error could still result in an outage.
+
+A critical part of a production deployment is anticipating a possible future recovery.
+
+It’s also worth noting that some of your investments in designing, documenting, and automating a recovery process might also be re-usable if you need to do large-scale replicated deployments at multiple sites.
+
+Elements of a DR plan include backups (and possibly replicas), replacements, a planned process, people who can carry out the process, and recurring training. Regular test exercises and [chaos engineering principles](https://github.com/dastergon/awesome-chaos-engineering) can be used to audit your readiness.
+
+Your availability requirements might demand that you retain local copies of the OS, Kubernetes components, and container images to allow recovery even during an Internet outage. The ability to deploy replacement hosts and nodes in an “air-gapped” scenario can also offer security and speed of deployment advantages.
+
+All Kubernetes objects are stored on etcd. Periodically backing up the etcd cluster data is important to recover Kubernetes clusters under disaster scenarios, such as losing all master nodes.
+
+Backing up an etcd cluster can be accomplished with etcd’s [built-in](https://coreos.com/etcd/docs/latest/op-guide/recovery.html) snapshot mechanism, and copying the resulting file to storage in a different failure domain. The snapshot file contains all the Kubernetes states and critical information. In order to keep the sensitive Kubernetes data safe, encrypt the snapshot files.
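+
+For instance, a snapshot can be streamed to a file with the etcd v3 client’s maintenance API, which is what `etcdctl snapshot save` does from the command line. In the sketch below the endpoint and output path are placeholders, and TLS setup is omitted for brevity:
+
+```go
+package main
+
+import (
+	"context"
+	"io"
+	"log"
+	"os"
+	"time"
+
+	"github.com/coreos/etcd/clientv3"
+)
+
+func main() {
+	cli, err := clientv3.New(clientv3.Config{
+		Endpoints:   []string{"127.0.0.1:2379"}, // assumed endpoint; add TLS config for a secured cluster
+		DialTimeout: 5 * time.Second,
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer cli.Close()
+
+	// Stream a snapshot of the connected member's backend.
+	rc, err := cli.Snapshot(context.Background())
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer rc.Close()
+
+	// Write it locally; copy the file to a different failure domain afterwards.
+	out, err := os.Create("/backup/etcd-snapshot.db")
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer out.Close()
+
+	if _, err := io.Copy(out, rc); err != nil {
+		log.Fatal(err)
+	}
+}
+```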
+
+Using disk volume based snapshot recovery of etcd can have issues; see [#40027](https://github.com/kubernetes/kubernetes/issues/40027). API-based backup solutions (e.g., [Ark](https://github.com/heptio/ark)) can offer more granular recovery than an etcd snapshot, but can also be slower. You could utilize both snapshot and API-based backups, but you should do one form of etcd backup as a minimum.
+
+Be aware that some Kubernetes extensions may maintain state in independent etcd clusters, on persistent volumes, or through other mechanisms. If this state is critical, it should have a backup and recovery plan.
+
+Some critical state is held outside etcd. Certificates, container images, and other configuration- and operation-related state may be managed by your automated install/update tooling. Even if these items can be regenerated, backup or replication might allow for faster recovery after a failure. Consider backups with a recovery plan for these items:
+
+* Certificate and key pairs
+ * CA
+ * API Server
+ * Apiserver-kubelet-client
+ * ServiceAccount signing
+ * “Front proxy”
+ * Front proxy client
+* Critical DNS records
+* IP/subnet assignments and reservations
+* External load-balancers
+* kubeconfig files
+* LDAP or other authentication details
+* Cloud provider specific account and configuration data
+
+## Considerations for your production workloads
+Anti-affinity specifications can be used to split clustered services across backing hosts, but at this time the settings are used only when the pod is scheduled. This means that Kubernetes can restart a failed node of your clustered application, but does not have a native mechanism to rebalance after a fail back. This is a topic worthy of a separate blog, but supplemental logic might be useful to achieve optimal workload placements after host or worker node recoveries or expansions. The [Pod Priority and Preemption feature](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) can be used to specify a preferred triage in the event of resource shortages caused by failures or bursting workloads.
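+
+As a sketch (the labels are illustrative), a required pod anti-affinity rule in the pod template spreads replicas of a clustered service across distinct nodes at scheduling time:
+
+```yaml
+# Fragment of a pod template: never co-schedule two "my-clustered-db" replicas on the same node
+affinity:
+  podAntiAffinity:
+    requiredDuringSchedulingIgnoredDuringExecution:
+    - labelSelector:
+        matchLabels:
+          app: my-clustered-db
+      topologyKey: kubernetes.io/hostname
+```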
+
+For stateful services, external attached volume mounts are the standard Kubernetes recommendation for a non-clustered service (e.g., a typical SQL database). At this time Kubernetes managed snapshots of these external volumes is in the category of a [roadmap feature request](https://docs.google.com/presentation/d/1dgxfnroRAu0aF67s-_bmeWpkM1h2LCxe6lB1l1oS0EQ/edit#slide=id.g3ca07c98c2_0_47), likely to align with the Container Storage Interface (CSI) integration. Thus performing backups of such a service would involve application specific, in-pod activity that is beyond the scope of this document. While awaiting better Kubernetes support for a snapshot and backup workflow, running your database service in a VM rather than a container, and exposing it to your Kubernetes workload may be worth considering.
+
+Cluster-distributed stateful services (e.g., Cassandra) can benefit from splitting across hosts, using [local persistent volumes](https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/#disclaimer) if resources allow. This would require deploying multiple Kubernetes worker nodes (could be VMs on hypervisor hosts) to preserve a quorum under single point failures.
+
+## Other considerations
+
+[Logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/) and [metrics](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/) (if collected and persistently retained) are valuable to diagnose outages, but given the variety of technologies available they will not be addressed in this blog. If Internet connectivity is available, it may be desirable to retain logs and metrics externally at a central location.
+
+Your production deployment should utilize an automated installation, configuration and update tool (e.g., [Ansible](https://github.com/kubernetes-incubator/kubespray), [BOSH](https://github.com/cloudfoundry-incubator/kubo-deployment), [Chef](https://github.com/chef-cookbooks/kubernetes), [Juju](https://kubernetes.io/docs/getting-started-guides/ubuntu/installation/), [kubeadm](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/), [Puppet](https://forge.puppet.com/puppetlabs/kubernetes), etc.). A manual process will have repeatability issues, be labor intensive, error prone, and difficult to scale. [Certified distributions](https://www.cncf.io/certification/software-conformance/#logos) are likely to include a facility for retaining configuration settings across updates, but if you implement your own install and config toolchain, then retention, backup and recovery of the configuration artifacts is essential. Consider keeping your deployment components and settings under a version control system such as Git.
+
+## Outage recovery
+
+[Runbooks](https://en.wikipedia.org/wiki/Runbook) documenting recovery procedures should be tested and retained offline -- perhaps even printed. When an on-call staff member is called up at 2 am on a Friday night, it may not be a great time to improvise. Better to execute from a pre-planned, tested checklist -- with shared access by remote and onsite personnel.
+
+## Final thoughts
+
+![airplane](/images/blog/2018-08-03-make-kubernetes-production-grade-anywhere/airplane.png)
+
+Buying a ticket on a commercial airline is convenient and safe. But when you travel to a remote location with a short runway, that commercial Airbus A320 flight isn’t an option. This doesn’t mean that air travel is off the table. It does mean that some compromises are necessary.
+
+The adage in aviation is that on a single engine aircraft, an engine failure means you crash. With twin engines, at the very least, you get more choices of where you crash. Kubernetes on a small number of hosts is similar, and if your business case justifies it, you might scale up to a larger fleet of mixed large and small vehicles (e.g., FedEx, Amazon).
+
+Those designing a production-grade Kubernetes solution have a lot of options and decisions. A blog-length article can’t provide all the answers, and can’t know your specific priorities. We do hope this offers a checklist of things to consider, along with some useful guidance. Some options were left “on the cutting room floor” (e.g., running Kubernetes components using [self-hosting](https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.9.md#optional-and-alpha-in-v19-self-hosting) instead of static pods). These might be covered in a follow up if there is interest. Also, Kubernetes’ high enhancement rate means that if your search engine found this article after 2019, some content might be past the “sell by” date.
diff --git a/content/en/blog/_posts/2018-08-10-introducing-kubebuilder.md b/content/en/blog/_posts/2018-08-10-introducing-kubebuilder.md
new file mode 100644
index 000000000..d5c6a3bba
--- /dev/null
+++ b/content/en/blog/_posts/2018-08-10-introducing-kubebuilder.md
@@ -0,0 +1,51 @@
+---
+layout: blog
+title: 'Introducing Kubebuilder: an SDK for building Kubernetes APIs using CRDs'
+date: 2018-08-10
+---
+
+**Authors**: Phillip Wittrock (Google), Sunil Arora (Google)
+
+[kubebuilder-repo]: https://github.com/kubernetes-sigs/kubebuilder
+[controller-runtime]: https://github.com/kubernetes-sigs/controller-runtime
+[SIG-APIMachinery]: https://github.com/kubernetes/community/tree/master/sig-api-machinery
+[mailing-list]: https://groups.google.com/forum/#!forum/kubernetes-sig-api-machinery
+[slack-channel]: https://slack.k8s.io/#kubebuilder
+[kubebuilder-book]: https://book.kubebuilder.io
+[open-an-issue]: https://github.com/kubernetes-sigs/kubebuilder/issues/new
+
+
+How can we enable applications such as MySQL, Spark and Cassandra to manage themselves just like Kubernetes Deployments and Pods do? How do we configure these applications as their own first class APIs instead of a collection of StatefulSets, Services, and ConfigMaps?
+
+We have been working on a solution and are happy to introduce [*kubebuilder*][kubebuilder-repo], a comprehensive development kit for rapidly building and publishing Kubernetes APIs and Controllers using CRDs. Kubebuilder scaffolds projects and API definitions and is built on top of the [controller-runtime][controller-runtime] libraries.
+
+### Why Kubebuilder and Kubernetes APIs?
+Applications and cluster resources typically require some operational work - whether it is replacing failed replicas with new ones, or scaling replica counts while resharding data. Running the MySQL application may require scheduling backups, reconfiguring replicas after scaling, setting up failure detection and remediation, etc.
+
+With the Kubernetes API model, management logic is embedded directly into an application specific Kubernetes API, e.g. a “MySQL” API. Users then declaratively manage the application through YAML configuration using tools such as kubectl, just like they do for Kubernetes objects. This approach is referred to as an Application Controller, also known as an Operator. Controllers are a powerful technique backing the core Kubernetes APIs that may be used to build many kinds of solutions in addition to Applications; such as Autoscalers, Workload APIs, Configuration APIs, CI/CD systems, and more.
+
+However, while it has been possible for trailblazers to build new Controllers on top of the raw API machinery, doing so has been a DIY “from scratch” experience, requiring developers to learn low level details about how Kubernetes libraries are implemented, handwrite boilerplate code, and build their own solutions for integration testing, RBAC configuration, documentation, etc. Kubebuilder makes this experience simple and easy by applying the lessons learned from building the core Kubernetes APIs.
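+
+Concretely, the heart of a Controller built with the controller-runtime libraries that kubebuilder scaffolds is a single reconcile function. The sketch below is hypothetical: the `MySQLReconciler` type and its logic are placeholders, and kubebuilder generates the real equivalent for your own API.
+
+```go
+package controllers
+
+import (
+	"sigs.k8s.io/controller-runtime/pkg/client"
+	"sigs.k8s.io/controller-runtime/pkg/reconcile"
+)
+
+// MySQLReconciler drives the observed state of a (hypothetical) MySQL custom
+// resource toward the state declared by the user.
+type MySQLReconciler struct {
+	client.Client
+}
+
+// Reconcile is called whenever a watched object changes.
+func (r *MySQLReconciler) Reconcile(req reconcile.Request) (reconcile.Result, error) {
+	// Fetch the object named by req.NamespacedName, compare its declared spec
+	// with what exists in the cluster (StatefulSets, Services, backups, ...)
+	// and create, update or delete resources until the two converge.
+	return reconcile.Result{}, nil
+}
+```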
+
+### Getting Started Building Application Controllers and Kubernetes APIs
+
+By providing an opinionated and structured solution for creating Controllers and Kubernetes APIs, kubebuilder gives developers a working “out of the box” experience that uses the lessons and best practices learned from developing the core Kubernetes APIs. Creating a new "Hello World" Controller with `kubebuilder` is as simple as:
+
+ 1. Create a project with `kubebuilder init`
+ 2. Define a new API with `kubebuilder create api`
+ 3. Build and run the provided main function with `make install` and `make run`
+
+This will scaffold the API and Controller for users to modify, as well as scaffold integration tests, RBAC rules, Dockerfiles, Makefiles, etc.
+After adding their implementation to the project, users create the artifacts to publish their API through:
+
+ 1. Build and push the container image from the provided Dockerfile using the `make docker-build` and `make docker-push` commands
+ 2. Deploy the API using the `make deploy` command
+
+Whether you are already a Controller aficionado or just want to learn what the buzz is about, check out the [kubebuilder repo][kubebuilder-repo] or take a look at an example in the [kubebuilder book][kubebuilder-book] to learn about how simple and easy it is to build Controllers.
+
+### Get Involved
+Kubebuilder is a project under [SIG API Machinery][SIG-APIMachinery] and is being actively developed by contributors from many companies such as Google, Red Hat, VMware, Huawei and others. Get involved by giving us feedback through these channels:
+
+ - Kubebuilder [chat room on Slack][slack-channel]
+ - SIG [mailing list][mailing-list]
+ - [GitHub issues][open-an-issue]
+ - Send a pull request in the [kubebuilder repo][kubebuilder-repo]
diff --git a/content/en/case-studies/_index.html b/content/en/case-studies/_index.html
index b44baa3a8..ff6696ac2 100644
--- a/content/en/case-studies/_index.html
+++ b/content/en/case-studies/_index.html
@@ -12,6 +12,20 @@
+
+
+
"I would almost be so bold as to say that most of these applications that we are building now would not have been possible without the cloud native patterns and the flexibility that Kubernetes enables."
"The big cloud native promise to our business is the ability to go from idea to production within 48 hours. We are some years away from this, but that’s quite feasible to us."
"We are in the position to run things at scale, in a public cloud environment, and test things out in way that a lot of people might not be able to do."
"The big cloud native promise to our business is the ability to go from idea to production within 48 hours. We are some years away from this, but that’s quite feasible to us."
+
diff --git a/content/en/case-studies/slingtv.html b/content/en/case-studies/slingtv.html
new file mode 100644
index 000000000..6cef810b4
--- /dev/null
+++ b/content/en/case-studies/slingtv.html
@@ -0,0 +1,105 @@
+---
+title: SlingTV Case Study
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+---
+
+
+
+
CASE STUDY:
Sling TV: Marrying Kubernetes and AI to Enable Proper Web Scale
+
+
+
+
+
+
+ Company Sling TV Location Englewood, Colorado Industry Streaming television
+
+
+
+
+
+
+
Challenge
+ Launched by DISH Network in 2015, Sling TV experienced great customer growth from the beginning. After just a year, “we were going through some growing pains of some of the legacy systems and trying to find the right architecture to enable our future,” says Brad Linder, Sling TV’s Cloud Native & Big Data Evangelist. The company has particular challenges: “We take live TV and distribute it over the internet out to a user’s device that we do not control,” says Linder. “In a lot of ways, we are working in the Wild West: The internet is what it is going to be, and if a customer’s service does not work for whatever reason, they do not care why. They just want things to work. Those are the variables of the equation that we have to try to solve. We really have to try to enable optionality and good customer experience at web scale.”
+
+
+
+
Solution
+ Led by the belief that “the cloud native architectures and patterns really give us a lot of flexibility in meeting the needs of that sort of customer base,” Linder partnered with Rancher Labs to build Sling TV’s next-generation platform around Kubernetes. “We are going to need to enable a hybrid cloud strategy including multiple public clouds and an on-premise VMWare multi data center environment to meet the needs of the business at some point, so getting that sort of abstraction was a real goal,” he says. “That is one of the biggest reasons why we picked Kubernetes.” The team launched its first applications on Kubernetes in Sling TV’s two internal data centers. The push to enable AWS as a data center option is underway and should be available by the end of 2018. The team has added Prometheus for monitoring and Jaeger for tracing, to work alongside the company’s existing tool sets: Zenoss, New Relic and ELK.
+
+
+
+
+
+
Impact
+ “We are getting to the place where we can one-click deploy an entire data center – the compute, network, Kubernetes, logging, monitoring and all the apps,” says Linder. “We have really enabled a platform thinking based approach to allowing applications to consume common tools. A new application can be onboarded in about an hour using common tooling and CI/CD processes. The gains on that side have been huge. Before, it took at least a few days to get things sorted for a new application to deploy. That does not consider the training of our operations staff to manage this new application. It is two or three orders of magnitude of savings in time and cost, and operationally it has given us the opportunity to let a core team of talented operations engineers manage common infrastructure and tooling to make our applications available at web scale.”
+
+
+
+
+
+
+
+
+ “I would almost be so bold as to say that most of these applications that we are building now would not have been possible without the cloud native patterns and the flexibility that Kubernetes enables.”
— Brad Linder, Cloud Native & Big Data Evangelist for Sling TV
+
+
+
+
+
+
The beauty of streaming television, like the service offered by Sling TV, is that you can watch it from any device you want, wherever you want.
Of course, from the provider side of things, that creates a particular set of challenges.
+“We take live TV and distribute it over the internet out to a user’s device that we do not control,” says Brad Linder, Sling TV’s Cloud Native & Big Data Evangelist. “In a lot of ways, we are working in the Wild West: The internet is what it is going to be, and if a customer’s service does not work for whatever reason, they do not care why. They just want things to work. Those are the variables of the equation that we have to try to solve. We really have to try to enable optionality and we have to do it at web scale.”
+Indeed, Sling TV experienced great customer growth from the beginning of its launch by DISH Network in 2015. After just a year, “we were going through some growing pains of some of the legacy systems and trying to find the right architecture to enable our future,” says Linder. Tasked with building a next-generation web scale platform for the “personalized customer experience,” Linder has spent the past year bringing Kubernetes to Sling TV.
+Led by the belief that “the cloud native architectures and patterns really give us a lot of flexibility in meeting the needs of our customers,” Linder partnered with Rancher Labs to build the platform around Kubernetes. “They have really helped us get our head around how to use Kubernetes,” he says. “We needed the flexibility to enable our use case versus just a simple orchestrator. Enabling our future in a way that did not give us vendor lock-in was also a key part of our strategy. I think that is part of the Rancher value proposition.”
+
+
+
+
+
+
+ “We needed the flexibility to enable our use case versus just a simple orchestrator. Enabling our future in a way that did not give us vendor lock-in was also a key part of our strategy. I think that is part of the Rancher value proposition.”
— Brad Linder, Cloud Native & Big Data Evangelist for Sling TV
+
+
+
+
+
+One big reason he chose Kubernetes was getting a level of abstraction that would enable the company to “enable a hybrid cloud strategy including multiple public clouds and an on-premise VMWare multi data center environment to meet the needs of the business,” he says. Another factor was how much the Kubernetes ecosystem has matured over the past couple of years. “We have spent a lot of time and energy around making logging, monitoring and alerting production ready to give us insights into applications’ well-being,” says Linder. The team has added Prometheus for monitoring and Jaeger for tracing, to work alongside the company’s existing tool sets: Zenoss, New Relic and ELK.
+With the emphasis on common tooling, “We are getting to the place where we can one-click deploy an entire data center – the compute, network, Kubernetes, logging, monitoring and all the apps,” says Linder. “We have really enabled a platform thinking based approach to allowing applications to consume common tools and services. A new application can be onboarded in about an hour using common tooling and CI/CD processes. The gains on that side have been huge. Before, it took at least a few days to get things sorted for a new application to deploy. That does not consider the training of our operations staff to manage this new application. It is two or three orders of magnitude of savings in time and cost, and operationally it has given us the opportunity to let a core team of talented operations engineers manage common infrastructure and tooling to make our applications available at web scale.”
+
+
+
+
+
+“We have to be able to react to changes and hiccups in the matrix. It is the foundation for our ability to deliver a high-quality service for our customers."
— Brad Linder, Cloud Native & Big Data Evangelist for Sling TV
+
+
+
+
+
+
+ The team launched its first applications on Kubernetes in Sling TV’s two internal data centers in the early part of Q1 2018 and began to enable AWS as a data center option. The company plans to expand into other public clouds in the future.
+The first application that went into production is a web socket-based back-end notification service. “It allows back-end changes to trigger messages to our clients in the field without the polling,” says Linder. “We are talking about very high volumes of messages with this application. Without something like Kubernetes to be able to scale up and down, as well as just support that overall workload, that is pretty hard to do. I would almost be so bold as to say that most of these applications that we are building now would not have been possible without the cloud native patterns and the flexibility that Kubernetes enables.”
+ Linder oversees three teams working together on building the next-generation platform: a platform engineering team; an enterprise middleware services team; and a big data and analytics team. “We have really tried to bring everything together to be able to have a client application interact with a cloud native middleware layer. That middleware layer must run on a platform, consume platform services and then have logs and events monitored by an artificial agent to keep things running smoothly,” says Linder.
+
+
+
+
+
+
+ This undertaking is about “trying to marry Kubernetes with AI to enable web scale that just works.”
— BRAD LINDER, CLOUD NATIVE & BIG DATA EVANGELIST FOR SLING TV
+
+
+
+
+ Ultimately, this undertaking is about “trying to marry Kubernetes with AI to enable web scale that just works,” he adds. “We want the artificial agents and the big data platform using the actual logs and events coming out of the applications, Kubernetes, the infrastructure, backing services and changes to the environment to make decisions like, ‘Hey we need more capacity for this service so please add more nodes.’ From a platform perspective, if you are truly doing web scale stuff and you are not using AI and big data, in my opinion, you are going to implode under your own weight. It is not a question of if, it is when. If you are in a ‘millions of users’ sort of environment, that implosion is going to be catastrophic. We are on our way to this goal and have learned a lot along the way.”
+For Sling TV, moving to cloud native has been exactly what they needed. “We have to be able to react to changes and hiccups in the matrix,” says Linder. “It is the foundation for our ability to deliver a high-quality service for our customers. Building intelligent platforms, tools and clients in the field consuming those services has got to be part of all of this. In my eyes that is a big part of what cloud native is all about. It is taking these distributed, potentially unreliable entities and enabling a robust customer experience they expect.”
+
+
+
+
+
+
+
+
diff --git a/content/en/case-studies/workiva.html b/content/en/case-studies/workiva.html
new file mode 100644
index 000000000..6db4c3ecf
--- /dev/null
+++ b/content/en/case-studies/workiva.html
@@ -0,0 +1,107 @@
+---
+title: Workiva Case Study
+case_study_styles: true
+cid: caseStudies
+css: /css/style_case_studies.css
+---
+
+
+
CASE STUDY:
Using OpenTracing to Help Pinpoint the Bottlenecks
+
+
+
+
+
+
+ Company Workiva Location Ames, Iowa Industry Enterprise Software
+
+
+
+
+
+
+
Challenge
+ Workiva offers a cloud-based platform for managing and reporting business data. This SaaS product, Wdesk, is used by more than 70 percent of the Fortune 500 companies. As the company made the shift from a monolith to a more distributed, microservice-based system, "We had a number of people working on this, all on different teams, so we needed to identify what the issues were and where the bottlenecks were," says Senior Software Architect MacLeod Broad. With back-end code running on Google App Engine, Google Compute Engine, as well as Amazon Web Services, Workiva needed a tracing system that was agnostic of platform. While preparing one of the company’s first products utilizing AWS, which involved a "sync and link" feature that linked data from spreadsheets built in the new application with documents created in the old application on Workiva’s existing system, Broad’s team found an ideal use case for tracing: There were circular dependencies, and optimizations often turned out to be micro-optimizations that didn’t impact overall speed.
+
+
+
+
+
+
+
Solution
+ Broad’s team introduced the platform-agnostic distributed tracing system OpenTracing to help them pinpoint the bottlenecks.
+
+
Impact
+ Now used throughout the company, OpenTracing produced immediate results. Software Engineer Michael Davis reports: "Tracing has given us immediate, actionable insight into how to improve our service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix."
+
+
+
+
+
+
+
+"With OpenTracing, my team was able to look at a trace and make optimization suggestions to another team without ever looking at their code." — MacLeod Broad, Senior Software Architect at Workiva
+
+
+
+
+
+
Last fall, MacLeod Broad’s platform team at Workiva was prepping one of the company’s first products utilizing Amazon Web Services when they ran into a roadblock.
+ Early on, Workiva’s backend had run mostly on Google App Engine. But things changed along the way as Workiva’s SaaS offering, Wdesk, a cloud-based platform for managing and reporting business data, grew its customer base to more than 70 percent of the Fortune 500 companies. "As customer needs grew and the product offering expanded, we started to leverage a wider offering of services such as Amazon Web Services as well as other Google Cloud Platform services, creating a multi-vendor environment."
+With this new product, there was a "sync and link" feature by which data "went through a whole host of services starting with the new spreadsheet system [Amazon Aurora] into what we called our linking system, and then pushed through http to our existing system, and then a number of calculations would go on, and the results would be transmitted back into the new system," says Broad. "We were trying to optimize that for speed. We thought we had made this great optimization and then it would turn out to be a micro optimization, which didn’t really affect the overall speed of things."
+The challenges faced by Broad’s team may sound familiar to other companies that have also made the shift from monoliths to more distributed, microservice-based systems. "We had a number of people working on this, all on different teams, so it was difficult to get our head around what the issues were and where the bottlenecks were," says Broad.
+ "Each service team was going through different iterations of their architecture and it was very hard to follow what was actually going on in each teams’ system," he adds. "We had circular dependencies where we’d have three or four different service teams unsure of where the issues really were, requiring a lot of back and forth communication. So we wasted a lot of time saying, ‘What part of this is slow? Which part of this is sometimes slow depending on the use case? Which part is degrading over time? Which part of this process is asynchronous so it doesn’t really matter if it’s long-running or not? What are we doing that’s redundant, and which part of this is buggy?’"
+
+
+
+
+
+
+ "A tracing system can at a glance explain an architecture, narrow down a performance bottleneck and zero in on it, and generally just help direct an investigation at a high level. Being able to do that at a glance is much faster than at a meeting or with three days of debugging, and it’s a lot faster than never figuring out the problem and just moving on." — MACLEOD BROAD, SENIOR SOFTWARE ARCHITECT AT WORKIVA
+
+
+
+
+
+Simply put, it was an ideal use case for tracing. "A tracing system can at a glance explain an architecture, narrow down a performance bottleneck and zero in on it, and generally just help direct an investigation at a high level," says Broad. "Being able to do that at a glance is much faster than at a meeting or with three days of debugging, and it’s a lot faster than never figuring out the problem and just moving on."
+With Workiva’s back-end code running on Google Compute Engine as well as App Engine and AWS, Broad knew that he needed a tracing system that was platform agnostic. "We were looking at different tracing solutions," he says, "and we decided that because it seemed to be a very evolving market, we didn’t want to get stuck with one vendor. So OpenTracing seemed like the cleanest way to avoid vendor lock-in on what backend we actually had to use."
+Once they introduced OpenTracing into this first use case, Broad says, "The trace made it super obvious where the bottlenecks were." Even though everyone had assumed it was Workiva’s existing code that was slowing things down, that wasn’t exactly the case. "It looked like the existing code was slow only because it was reaching out to our next-generation services, and they were taking a very long time to service all those requests," says Broad. "On the waterfall graph you can see the exact same work being done on every request when it was calling back in. So every service request would look the exact same for every response being paged out. And then it was just a no-brainer of, ‘Why is it doing all this work again?’"
+Using the insight OpenTracing gave them, "My team was able to look at a trace and make optimization suggestions to another team without ever looking at their code," says Broad. "The way we named our traces gave us insight whether it’s doing a SQL call or it’s making an RPC. And so it was really easy to say, ‘OK, we know that it’s going to page through all these requests. Do the work once and stuff it in cache.’ And we were done basically. All those calls became sub-second calls immediately."
+
+
+
+
+
+
+
+"We were looking at different tracing solutions and we decided that because it seemed to be a very evolving market, we didn’t want to get stuck with one vendor. So OpenTracing seemed like the cleanest way to avoid vendor lock-in on what backend we actually had to use." — MACLEOD BROAD, SENIOR SOFTWARE ARCHITECT AT WORKIVA
+
+
+
+
+
+ After the success of the first use case, everyone involved in the trial went back and fully instrumented their products. Tracing was added to a few more use cases. "We wanted to get through the initial implementation pains early without bringing the whole department along for the ride," says Broad. "Now, a lot of teams add it when they’re starting up a new service. We’re really pushing adoption now more than we were before."
+Some teams were won over quickly. "Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service," says Software Engineer Michael Davis. "Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix."
+Most of Workiva’s major products are now traced using OpenTracing, with data pushed into Google StackDriver. Even the products that aren’t fully traced have some components and libraries that are.
+Broad points out that because some of the engineers were working on App Engine and already had experience with the platform’s Appstats library for profiling performance, it didn’t take much to get them used to using OpenTracing. But others were a little more reluctant. "The biggest hindrance to adoption I think has been the concern about how much latency is introducing tracing [and StackDriver] going to cost," he says. "People are also very concerned about adding middleware to whatever they’re working on. Questions about passing the context around and how that’s done were common. A lot of our Go developers were fine with it, because they were already doing that in one form or another. Our Java developers were not super keen on doing that because they’d used other systems that didn’t require that."
+But the benefits clearly outweighed the concerns, and today, Workiva’s official policy is to use tracing.
+In fact, Broad believes that tracing naturally fits in with Workiva’s existing logging and metrics systems. "This was the way we presented it internally, and also the way we designed our use," he says. "Our traces are logged in the exact same mechanism as our app metric and logging data, and they get pushed the exact same way. So we treat all that data exactly the same when it’s being created and when it’s being recorded. We have one internal library that we use for logging, telemetry, analytics and tracing."
+
+
+
+
+
+
+ "Tracing has given us immediate, actionable insight into how to improve our [Workspaces] service. Through a combination of seeing where each call spends its time, as well as which calls are most often used, we were able to reduce our average response time by 95 percent (from 600ms to 30ms) in a single fix." — Michael Davis, Software Engineer, Workiva
+
+
+
+
+ For Workiva, OpenTracing has become an essential tool for zeroing in on optimizations and determining what’s actually a micro-optimization by observing usage patterns. "On some projects we often assume what the customer is doing, and we optimize for these crazy scale cases that we hit 1 percent of the time," says Broad. "It’s been really helpful to be able to say, ‘OK, we’re adding 100 milliseconds on every request that does X, and we only need to add that 100 milliseconds if it’s the worst of the worst case, which only happens one out of a thousand requests or one out of a million requests.’"
+Unlike many other companies, Workiva also traces the client side. "For us, the user experience is important—it doesn’t matter if the RPC takes 100 milliseconds if it still takes 5 seconds to do the rendering to show it in the browser," says Broad. "So for us, those client times are important. We trace it to see what parts of loading take a long time. We’re in the middle of working on a definition of what is ‘loaded.’ Is it when you have it, or when it’s rendered, or when you can interact with it? Those are things we’re planning to use tracing for to keep an eye on and to better understand."
+That also requires adjusting for differences in external and internal clocks. "Before time correcting, it was horrible; our traces were more misleading than anything," says Broad. "So we decided that we would return a timestamp on the response headers, and then have the client reorient its time based on that—not change its internal clock but just calculate the offset on the response time to when the client got it. And if you end up in an impossible situation where a client RPC spans 210 milliseconds but the time on the response time is outside of that window, then we have to reorient that."
+Broad is excited about the impact OpenTracing has already had on the company, and is also looking ahead to what else the technology can enable. One possibility is using tracing to update documentation in real time. "Keeping documentation up to date with reality is a big challenge," he says. "Say, we just ran a trace simulation or we just ran a smoke test on this new deploy, and the architecture doesn’t match the documentation. We can find whose responsibility it is and let them know and have them update it. That’s one of the places I’d like to get in the future with tracing."
+
+
+
+
diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md
index f672eb38d..004fca8dc 100644
--- a/content/en/docs/concepts/architecture/nodes.md
+++ b/content/en/docs/concepts/architecture/nodes.md
@@ -12,7 +12,7 @@ weight: 10
A `node` is a worker machine in Kubernetes, previously known as a `minion`. A node
may be a VM or physical machine, depending on the cluster. Each node has
the services necessary to run [pods](/docs/concepts/workloads/pods/pod/) and is managed by the master
-components. The services on a node include Docker, kubelet and kube-proxy. See
+components. The services on a node include the [container runtime](/docs/concepts/overview/components/#node-components), kubelet and kube-proxy. See
[The Kubernetes Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) section in the
architecture design doc for more details.
@@ -254,8 +254,7 @@ capacity when adding a node.
The Kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
checks that the sum of the requests of containers on the node is no greater than the node capacity. It
-includes all containers started by the kubelet, but not containers started directly by Docker nor
-processes not in containers.
+includes all containers started by the kubelet, but not containers started directly by the [container runtime](/docs/concepts/overview/components/#node-components) nor any process running outside of the containers.
If you want to explicitly reserve resources for non-pod processes, you can create a placeholder
pod. Use the following template:
diff --git a/content/en/docs/concepts/cluster-administration/addons.md b/content/en/docs/concepts/cluster-administration/addons.md
index b8a70ecc0..fc6bf0b6c 100644
--- a/content/en/docs/concepts/cluster-administration/addons.md
+++ b/content/en/docs/concepts/cluster-administration/addons.md
@@ -28,6 +28,7 @@ Add-ons in each section are sorted alphabetically - the ordering does not imply
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
* [Contiv](http://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](http://github.com/contiv). The [installer](http://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) is an overlay network provider that can be used with Kubernetes.
+* [Knitter](https://github.com/ZTE/Knitter/) is a network solution that supports multiple networks in Kubernetes.
* [Multus](https://github.com/Intel-Corp/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and Openshift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
diff --git a/content/en/docs/concepts/cluster-administration/networking.md b/content/en/docs/concepts/cluster-administration/networking.md
index 6d0dfc6af..7ee386175 100644
--- a/content/en/docs/concepts/cluster-administration/networking.md
+++ b/content/en/docs/concepts/cluster-administration/networking.md
@@ -201,6 +201,10 @@ sysctl net.ipv4.ip_forward=1
The result of all this is that all `Pods` can reach each other and can egress
traffic to the internet.
+### Knitter
+
+[Knitter](https://github.com/ZTE/Knitter/) is a network solution which supports multiple networks in Kubernetes. It provides tenant management and network management. In addition to multiple network planes, Knitter includes a set of end-to-end NFV container networking solutions, such as preserving IP addresses for applications and IP address migration.
+
### Kube-router
[Kube-router](https://github.com/cloudnativelabs/kube-router) is a purpose-built networking solution for Kubernetes that aims to provide high performance and operational simplicity. Kube-router provides a Linux [LVS/IPVS](http://www.linuxvirtualserver.org/software/ipvs.html)-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
diff --git a/content/en/docs/concepts/configuration/manage-compute-resources-container.md b/content/en/docs/concepts/configuration/manage-compute-resources-container.md
index 45d098b5e..fb091fa20 100644
--- a/content/en/docs/concepts/configuration/manage-compute-resources-container.md
+++ b/content/en/docs/concepts/configuration/manage-compute-resources-container.md
@@ -308,7 +308,7 @@ You can see that the Container was terminated because of `reason:OOM Killed`, wh
## Local ephemeral storage
{{< feature-state state="beta" >}}
-Kubernetes version 1.8 introduces a new resource, _ephemeral-storage_ for managing local ephemeral storage. In each Kubernetes node, kubelet's root directory (/var/lib/kubelet by default) and log directory (/var/log) are stored on the root partition of the node. This partition is also shared and consumed by pods via EmptyDir volumes, container logs, image layers and container writable layers.
+Kubernetes version 1.8 introduces a new resource, _ephemeral-storage_ for managing local ephemeral storage. In each Kubernetes node, kubelet's root directory (/var/lib/kubelet by default) and log directory (/var/log) are stored on the root partition of the node. This partition is also shared and consumed by Pods via emptyDir volumes, container logs, image layers and container writable layers.
This partition is “ephemeral” and applications cannot expect any performance SLAs (Disk IOPS for example) from this partition. Local ephemeral storage management only applies for the root partition; the optional partition for image layer and writable layer is out of scope.
@@ -366,11 +366,11 @@ run on. Each node has a maximum amount of local ephemeral storage it can provide
### How Pods with ephemeral-storage limits run
-For container-level isolation, if a Container's writable layer and logs usage exceeds its storage limit, the pod will be evicted. For pod-level isolation, if the sum of the local ephemeral storage usage from all containers and also the pod's EmptyDir volumes exceeds the limit, the pod will be evicted.
+For container-level isolation, if a Container's writable layer and logs usage exceeds its storage limit, the Pod will be evicted. For pod-level isolation, if the sum of the local ephemeral storage usage from all containers and also the Pod's emptyDir volumes exceeds the limit, the Pod will be evicted.
-## Extended Resources
+## Extended resources
-Extended Resources are fully-qualified resource names outside the
+Extended resources are fully-qualified resource names outside the
`kubernetes.io` domain. They allow cluster operators to advertise and users to
consume the non-Kubernetes-built-in resources.
@@ -397,7 +397,7 @@ operation, the node's `status.capacity` will include a new resource. The
`status.allocatable` field is updated automatically with the new resource
asynchronously by the kubelet. Note that because the scheduler uses the node
`status.allocatable` value when evaluating Pod fitness, there may be a short
-delay between patching the node capacity with a new resource and the first pod
+delay between patching the node capacity with a new resource and the first Pod
that requests the resource to be scheduled on that node.
**Example:**
@@ -423,7 +423,7 @@ JSON-Pointer. For more details, see
#### Cluster-level extended resources
Cluster-level extended resources are not tied to nodes. They are usually managed
-by scheduler extenders, which handle the resource comsumption, quota and so on.
+by scheduler extenders, which handle the resource consumption and resource quota.
You can specify the extended resources that are handled by scheduler extenders
in [scheduler policy
@@ -432,12 +432,13 @@ configuration](https://github.com/kubernetes/kubernetes/blob/release-1.10/pkg/sc
**Example:**
The following configuration for a scheduler policy indicates that the
-cluster-level extended resource "example.com/foo" is handled by scheduler
+cluster-level extended resource "example.com/foo" is handled by the scheduler
extender.
- - The scheduler sends a pod to the scheduler extender only if the pod requests
- "example.com/foo".
- - The `ignoredByScheduler` field specifies that the scheduler does not check
- the "example.com/foo" resource in its `PodFitsResources` predicate.
+
+- The scheduler sends a Pod to the scheduler extender only if the Pod requests
+ "example.com/foo".
+- The `ignoredByScheduler` field specifies that the scheduler does not check
+ the "example.com/foo" resource in its `PodFitsResources` predicate.
```json
{
@@ -460,20 +461,20 @@ extender.
### Consuming extended resources
-Users can consume Extended Resources in Pod specs just like CPU and memory.
+Users can consume extended resources in Pod specs just like CPU and memory.
The scheduler takes care of the resource accounting so that no more than the
available amount is simultaneously allocated to Pods.
-The API server restricts quantities of Extended Resources to whole numbers.
+The API server restricts quantities of extended resources to whole numbers.
Examples of _valid_ quantities are `3`, `3000m` and `3Ki`. Examples of
_invalid_ quantities are `0.5` and `1500m`.
{{< note >}}
-**Note:** Extended Resources replace Opaque Integer Resources.
-Users can use any domain name prefix other than "`kubernetes.io`" which is reserved.
+**Note:** Extended resources replace Opaque Integer Resources.
+Users can use any domain name prefix other than `kubernetes.io` which is reserved.
{{< /note >}}
-To consume an Extended Resource in a Pod, include the resource name as a key
+To consume an extended resource in a Pod, include the resource name as a key
in the `spec.containers[].resources.limits` map in the container spec.
{{< note >}}
@@ -482,7 +483,7 @@ must be equal if both are present in a container spec.
{{< /note >}}
A Pod is scheduled only if all of the resource requests are satisfied, including
-CPU, memory and any Extended Resources. The Pod remains in the `PENDING` state
+CPU, memory and any extended resources. The Pod remains in the `PENDING` state
as long as the resource request cannot be satisfied.
**Example:**
@@ -533,11 +534,11 @@ consistency across providers and platforms.
{{% capture whatsnext %}}
-* Get hands-on experience [assigning Memory resources to containers and pods](/docs/tasks/configure-pod-container/assign-memory-resource/).
+* Get hands-on experience [assigning Memory resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/).
-* Get hands-on experience [assigning CPU resources to containers and pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
+* Get hands-on experience [assigning CPU resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).
-* [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
+* [Container API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
* [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core)
diff --git a/content/en/docs/concepts/configuration/overview.md b/content/en/docs/concepts/configuration/overview.md
index bf8269dc5..17926fa5f 100644
--- a/content/en/docs/concepts/configuration/overview.md
+++ b/content/en/docs/concepts/configuration/overview.md
@@ -52,7 +52,7 @@ This is a living document. If you think of something that is not on this list bu
If you only need access to the port for debugging purposes, you can use the [apiserver proxy](/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls) or [`kubectl port-forward`](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
- If you explicitly need to expose a Pod's port on the node, consider using a [NodePort](/docs/concepts/services-networking/service/#type-nodeport) Service before resorting to `hostPort`.
+ If you explicitly need to expose a Pod's port on the node, consider using a [NodePort](/docs/concepts/services-networking/service/#nodeport) Service before resorting to `hostPort`.
- Avoid using `hostNetwork`, for the same reasons as `hostPort`.
diff --git a/content/en/docs/concepts/configuration/pod-priority-preemption.md b/content/en/docs/concepts/configuration/pod-priority-preemption.md
index 68572920a..e1f19c658 100644
--- a/content/en/docs/concepts/configuration/pod-priority-preemption.md
+++ b/content/en/docs/concepts/configuration/pod-priority-preemption.md
@@ -368,4 +368,29 @@ When multiple nodes exist for preemption and none of the above scenarios apply,
we expect the scheduler to choose a node with the lowest priority. If that is
not the case, it may indicate a bug in the scheduler.
+## Interactions of Pod priority and QoS
+
+Pod priority and
+[QoS](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md)
+are two orthogonal features with few interactions and no default restrictions on
+setting the priority of a Pod based on its QoS classes. The scheduler's
+preemption logic does not consider QoS when choosing preemption targets. Preemption
+considers Pod priority and attempts to choose a set of targets with the lowest
+priority. Higher-priority Pods are considered for preemption only if the removal
+of the lowest priority Pods is not sufficient to allow the scheduler to schedule
+the preemptor Pod, or if the lowest priority Pods are protected by
+`PodDisruptionBudget`.
+
+The only component that considers both QoS and Pod priority is
+[Kubelet out-of-resource eviction](/docs/tasks/administer-cluster/out-of-resource/).
+The kubelet ranks Pods for eviction first by whether or not their usage of the
+starved resource exceeds requests, then by Priority, and then by the consumption
+of the starved compute resource relative to the Pods’ scheduling requests.
+See
+[Evicting end-user pods](/docs/tasks/administer-cluster/out-of-resource/#evicting-end-user-pods)
+for more details. Kubelet out-of-resource eviction does not evict Pods whose
+usage does not exceed their requests. If a Pod with lower priority is not
+exceeding its requests, it won't be evicted. Another Pod with higher priority
+that exceeds its requests may be evicted.
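+
+As a quick way to inspect both values for a running Pod (the Pod name `nginx`
+below is only an example), you can ask the API server for the Pod's QoS class
+and priority:
+
+```bash
+kubectl get pod nginx -o jsonpath='{.status.qosClass}{"\n"}{.spec.priority}{"\n"}'
+```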
+
{{% /capture %}}
diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md
index 0300f8ec6..5a2477b23 100644
--- a/content/en/docs/concepts/services-networking/ingress.md
+++ b/content/en/docs/concepts/services-networking/ingress.md
@@ -275,7 +275,7 @@ that it applies to all Ingress, such as the load balancing algorithm, backend
weight scheme, and others. More advanced load balancing concepts
(e.g. persistent sessions, dynamic weights) are not yet exposed through the
Ingress. You can still get these features through the
-[service loadbalancer](https://github.com/kubernetes/ingress-nginx/blob/master/docs/ingress-controller-catalog.md).
+[service loadbalancer](https://github.com/kubernetes/ingress-nginx).
With time, we plan to distill load balancing patterns that are applicable
cross platform into the Ingress resource.
@@ -353,8 +353,8 @@ Please track the [L7 and Ingress proposal](https://github.com/kubernetes/kuberne
You can expose a Service in multiple ways that don't directly involve the Ingress resource:
-* Use [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#type-loadbalancer)
-* Use [Service.Type=NodePort](/docs/concepts/services-networking/service/#type-nodeport)
+* Use [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer)
+* Use [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport)
* Use a [Port Proxy](https://git.k8s.io/contrib/for-demos/proxy-to-service)
{{% /capture %}}
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index 5f82137d3..88a8fb9b4 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -147,7 +147,7 @@ than [`ExternalName`](#externalname).
In Kubernetes v1.0, `Services` are a "layer 4" (TCP/UDP over IP) construct, the
proxy was purely in userspace. In Kubernetes v1.1, the `Ingress` API was added
(beta) to represent "layer 7"(HTTP) services, iptables proxy was added too,
-and become the default operating mode since Kubernetes v1.2. In Kubernetes v1.8.0-beta.0,
+and became the default operating mode in Kubernetes v1.2. In Kubernetes v1.8.0-beta.0,
ipvs proxy was added.
### Proxy-mode: userspace
diff --git a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md
index ed9af77d0..85c052466 100644
--- a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md
+++ b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md
@@ -192,7 +192,7 @@ status:
The new Pod conditions must comply with Kubernetes [label key format](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set).
Since the `kubectl patch` command still doesn't support patching object status,
the new Pod conditions have to be injected through the `PATCH` action using
-one of the [KubeClient libraries](/docs/reference/using-api/client-librarie/).
+one of the [KubeClient libraries](/docs/reference/using-api/client-libraries/).
With the introduction of new Pod conditions, a Pod is evaluated to be ready **only**
when both the following statements are true:
diff --git a/content/en/docs/concepts/workloads/pods/pod.md b/content/en/docs/concepts/workloads/pods/pod.md
index c0451d5b7..147eb1f83 100644
--- a/content/en/docs/concepts/workloads/pods/pod.md
+++ b/content/en/docs/concepts/workloads/pods/pod.md
@@ -165,7 +165,7 @@ Pod is exposed as a primitive in order to facilitate:
## Termination of Pods
-Because pods represent running processes on nodes in the cluster, it is important to allow those processes to gracefully terminate when they are no longer needed (vs being violently killed with a KILL signal and having no chance to clean up). Users should be able to request deletion and know when processes terminate, but also be able to ensure that deletes eventually complete. When a user requests deletion of a pod the system records the intended grace period before the pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container. Once the grace period has expired the KILL signal is sent to those processes and the pod is then deleted from the API server. If the Kubelet or the container manager is restarted while waiting for processes to terminate, the termination will be retried with the full grace period.
+Because pods represent running processes on nodes in the cluster, it is important to allow those processes to gracefully terminate when they are no longer needed (vs being violently killed with a KILL signal and having no chance to clean up). Users should be able to request deletion and know when processes terminate, but also be able to ensure that deletes eventually complete. When a user requests deletion of a pod, the system records the intended grace period before the pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container. Once the grace period has expired, the KILL signal is sent to those processes, and the pod is then deleted from the API server. If the Kubelet or the container manager is restarted while waiting for processes to terminate, the termination will be retried with the full grace period.
An example flow:
diff --git a/content/en/docs/contribute/_index.md b/content/en/docs/contribute/_index.md
new file mode 100644
index 000000000..7171eac6f
--- /dev/null
+++ b/content/en/docs/contribute/_index.md
@@ -0,0 +1,73 @@
+---
+content_template: templates/concept
+title: Contribute to Kubernetes docs
+linktitle: Contribute
+main_menu: true
+weight: 80
+---
+
+{{% capture overview %}}
+
+If you would like to help contribute to the Kubernetes documentation or website,
+we're happy to have your help! Anyone can contribute, whether you're new to the
+project or you've been around a long time, and whether you self-identify as a
+developer, an end user, or someone who just can't stand seeing typos.
+
+Looking for the [style guide](/docs/contribute/style/style-guide/)?
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Types of contributor
+
+- A _member_ of the Kubernetes organization has [signed the CLA](/contribute/start#sign-the-cla)
+ and contributed some time and effort to the project. See
+ [Community membership](https://github.com/kubernetes/community/blob/master/community-membership.md)
+ for specific criteria for membership.
+- A SIG Docs _reviewer_ is a member of the Kubernetes organization who has
+ expressed interest in reviewing documentation pull requests and who has been
+ added to the appropriate Github group and `OWNERS` files in the Github
+ repository, by a SIG Docs Approver.
+- A SIG Docs _approver_ is a member in good standing who has shown a continued
+ commitment to the project and is granted the ability to merge pull requests
+ and thus to publish content on behalf of the Kubernetes organization.
+ Approvers can also represent SIG Docs in the larger Kubernetes community.
+ Some of the duties of a SIG Docs approver, such as coordinating a release,
+ require a significant time commitment.
+
+## Ways to contribute
+
+This list is divided into things anyone can do, things Kubernetes organization
+members can do, and things that require a higher level of access and familiarity
+with SIG Docs processes. Contributing consistently over time can help you
+understand some of the tooling and organizational decisions that have already
+been made.
+
+This is not an exhaustive list of ways you can contribute to the Kubernetes
+documentation, but it should help you get started.
+
+- [Anyone](/docs/contribute/start/)
+ - File actionable bugs
+- [Member](/docs/contribute/start/)
+ - Improve existing docs
+ - Bring up ideas for improvement on Slack or SIG docs mailing list
+ - Improve docs accessibility
+ - Provide non-binding feedback on PRs
+ - Write a blog post or case study
+- [Reviewer](/docs/contribute/intermediate/)
+ - Document new features
+ - Triage and categorize issues
+ - Review PRs
+ - Create diagrams, graphics assets, and embeddable screencasts / videos
+ - Localization
+ - Contribute to other repos as a docs representative
+ - Edit user-facing strings in code
+ - Improve code comments, Godoc
+- [Approver](/docs/contribute/advanced/)
+ - Publish contributor content by approving and merging PRs
+ - Participate in a Kubernetes release team as a docs representative
+ - Propose improvements to the style guide
+ - Propose improvements to docs tests
+ - Propose improvements to the Kubernetes website or other tooling
+
+{{% /capture %}}
diff --git a/content/en/docs/contribute/advanced.md b/content/en/docs/contribute/advanced.md
new file mode 100644
index 000000000..75ec3b6d5
--- /dev/null
+++ b/content/en/docs/contribute/advanced.md
@@ -0,0 +1,115 @@
+---
+title: Advanced contributing
+slug: advanced
+content_template: templates/concept
+weight: 30
+---
+
+{{% capture overview %}}
+
+This page assumes that you've read and mastered the
+[Start contributing](/docs/contribute/start/) and
+[Intermediate contributing](/docs/contribute/intermediate/) topics and are ready
+to learn about more ways to contribute. You need to use the Git command line
+client and other tools for some of these tasks.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Be the PR Wrangler for a week
+
+SIG Docs [approvers](/docs/contribute/participating/#approvers) can be PR
+wranglers.
+
+SIG Docs approvers are added to the
+[PR Wrangler rotation scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers)
+for weekly rotations. The PR wrangler's duties include:
+
+- Review incoming pull requests daily.
+ - Help new contributors sign the CLA, and close any PR where the CLA hasn't
+ been signed for two weeks. PR authors can reopen the PR after signing the
+ CLA, so this is a low-risk way to make sure nothing gets merged without a
+ signed CLA.
+ - Provide feedback on proposed changes, including helping facilitate technical
+ review from members of other SIGs.
+ - Merge PRs when they are ready, or close PRs that shouldn't be accepted.
+- Triage and tag incoming issues daily. See
+ [Intermediate contributing](/docs/contribute/intermediate/) for guidelines
+ about how SIG Docs uses metadata.
+
+## Propose improvements
+
+SIG Docs
+[members](/docs/contribute/participating/#members) can propose improvements.
+
+After you've been contributing to the Kubernetes documentation for a while, you
+may have ideas for improvement to the style guide, the toolchain used to build
+the documentation, the website style, the processes for reviewing and merging
+pull requests, or other aspects of the documentation. For maximum transparency,
+these types of proposals need to be discussed in a SIG Docs meeting or on the
+[kubernetes-sig-docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs).
+In addition, it can really help to have some context about the way things
+currently work and why past decisions have been made before proposing sweeping
+changes. The quickest way to get answers to questions about how the documentation
+currently works is to ask in the `#sig-docs` Slack channel on
+[kubernetes.slack.com](https://kubernetes.slack.com).
+
+After discussion has taken place and the sig is in agreement about the desired
+outcome, you can work on the proposed changes in the way that is the most
+appropriate. For instance, an update to the style guide or the website's
+functionality might involve opening a pull request, while a change related to
+documentation testing might involve working with sig-testing.
+
+## Coordinate docs for a Kubernetes release
+
+SIG Docs [approvers](/docs/contribute/participating/#approvers) can coordinate
+docs for a Kubernetes release.
+
+Each Kubernetes release is coordinated by a team of people participating in the
+sig-release special interest group (SIG). Others on the release team for a given
+release include an overall release lead, as well as representatives from sig-pm,
+sig-testing, and others. To find out more about Kubernetes release processes,
+refer to
+[https://github.com/kubernetes/sig-release](https://github.com/kubernetes/sig-release).
+
+The SIG Docs representative for a given release coordinates the following tasks:
+
+- Monitor the feature-tracking spreadsheet for new or changed features with an
+ impact on documentation. If documentation for a given feature won't be ready
+ for the release, the feature may not be allowed to go into the release.
+- Attend sig-release meetings regularly and give updates on the status of the
+ docs for the release.
+- Review and copyedit feature documentation drafted by the sig responsible for
+ implementing the feature.
+- Merge release-related pull requests and maintain the Git feature branch for
+ the release.
+- Mentor other SIG Docs contributors who want to learn how to do this role in
+ the future. This is known as "shadowing".
+- Publish the documentation changes related to the release when the release
+ artifacts are published.
+
+Coordinating a release is typically a 3-4 month commitment, and the duty is
+rotated among SIG Docs approvers.
+
+## Sponsor a new contributor
+
+SIG Docs [reviewers](/docs/contribute/participating/#reviewers) can sponsor
+new contributors.
+
+After a new contributor has successfully submitted 5 substantive pull requests
+to one or more Kubernetes repositories, they are eligible to apply for
+[membership](/docs/contribute/participating#members) in the Kubernetes
+organization. The contributor's membership needs to be backed by two sponsors
+who are already reviewers.
+
+New docs contributors can request sponsors by asking in the #sig-docs channel
+on the [Kubernetes Slack instance](https://kubernetes.slack.com) or on the
+[SIG Docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs).
+If you feel confident about the applicant's work, you can volunteer to sponsor them.
+When they submit their membership application, reply to the application with a
+"+1" and include details about why you think the applicant is a good fit for
+membership in the Kubernetes organization.
+
+{{% /capture %}}
+
diff --git a/content/en/docs/contribute/generate-ref-docs/_index.md b/content/en/docs/contribute/generate-ref-docs/_index.md
new file mode 100644
index 000000000..cf058d98f
--- /dev/null
+++ b/content/en/docs/contribute/generate-ref-docs/_index.md
@@ -0,0 +1,9 @@
+---
+title: Reference docs overview
+main_menu: true
+weight: 80
+---
+
+Much of the Kubernetes reference documentation is generated from Kubernetes
+source code, using scripts. The topics in this section document how to generate
+this type of content.
diff --git a/content/en/docs/home/contribute/generated-reference/federation-api.md b/content/en/docs/contribute/generate-ref-docs/federation-api.md
similarity index 100%
rename from content/en/docs/home/contribute/generated-reference/federation-api.md
rename to content/en/docs/contribute/generate-ref-docs/federation-api.md
diff --git a/content/en/docs/home/contribute/generated-reference/kubectl.md b/content/en/docs/contribute/generate-ref-docs/kubectl.md
similarity index 100%
rename from content/en/docs/home/contribute/generated-reference/kubectl.md
rename to content/en/docs/contribute/generate-ref-docs/kubectl.md
diff --git a/content/en/docs/home/contribute/generated-reference/kubernetes-api.md b/content/en/docs/contribute/generate-ref-docs/kubernetes-api.md
similarity index 100%
rename from content/en/docs/home/contribute/generated-reference/kubernetes-api.md
rename to content/en/docs/contribute/generate-ref-docs/kubernetes-api.md
diff --git a/content/en/docs/home/contribute/generated-reference/kubernetes-components.md b/content/en/docs/contribute/generate-ref-docs/kubernetes-components.md
similarity index 100%
rename from content/en/docs/home/contribute/generated-reference/kubernetes-components.md
rename to content/en/docs/contribute/generate-ref-docs/kubernetes-components.md
diff --git a/content/en/docs/contribute/intermediate.md b/content/en/docs/contribute/intermediate.md
new file mode 100644
index 000000000..3166dcca5
--- /dev/null
+++ b/content/en/docs/contribute/intermediate.md
@@ -0,0 +1,789 @@
+---
+title: Intermediate contributing
+slug: intermediate
+content_template: templates/concept
+weight: 20
+---
+
+{{% capture overview %}}
+
+This page assumes that you've read and mastered the tasks in the
+[start contributing](/docs/contribute/start/) topic and are ready to
+learn about more ways to contribute.
+
+{{< note >}}
+**Note:** Some tasks require you to use the Git command line client and other
+tools.
+{{< /note >}}
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+Now that you've gotten your feet wet and helped out with the Kubernetes docs in
+the ways outlined in the [start contributing](/docs/contribute/start/) topic,
+you may feel ready to do more. These tasks assume that you have, or are willing
+to gain, deeper knowledge of the following topic areas:
+
+- Kubernetes concepts
+- Kubernetes documentation workflows
+- Where and how to find information about upcoming Kubernetes features
+- Strong research skills in general
+
+These tasks are not as sequential as the beginner tasks. There is no expectation
+that one person does all of them all of the time.
+
+## Review pull requests
+
+In any given week, a specific docs approver volunteers to do initial triage
+and review of [pull requests and issues](#triage-and-categorize-issues). This
+person is the "PR Wrangler" for the week. The schedule is maintained using the
+[PR Wrangler scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers).
+To be added to this list, attend the weekly SIG Docs meeting and volunteer. Even
+if you are not on the schedule for the current week, you can still review pull
+requests (PRs) that are not already under active review.
+
+In addition to the rotation, an automated system comments on each new PR and
+suggests reviewers and approvers for the PR, based on the list of approvers and
+reviewers in the affected files. The PR author is expected to follow the
+guidance of the bot, and this also helps PRs to get reviewed quickly.
+
+We want to get pull requests (PRs) merged and published as quickly as possible.
+To ensure the docs are accurate and up to date, each PR needs to be reviewed by
+people who understand the content, as well as people with experience writing
+great documentation.
+
+Reviewers and approvers need to provide actionable and constructive feedback to
+keep contributors engaged and help them to improve. Sometimes helping a new
+contributor get their PR ready to merge takes more time than just rewriting it
+yourself, but the project is better in the long term when we have a diversity of
+active participants.
+
+Before you start reviewing PRs, make sure you are familiar with the
+[Documentation Style Guide](/docs/contribute/style/style-guide/)
+and the [code of conduct](/community/code-of-conduct/).
+
+### Find a PR to review
+
+To see all open PRs, go to the **Pull Requests** tab in the Github repository.
+A PR is eligible for review when it meets all of the following criteria:
+
+- Has the `cncf-cla: yes` tag
+- Does not have WIP in the description
+- Does not have a tag including the phrase `do-not-merge`
+- Has no merge conflicts
+- Is based against the correct branch (usually `master` unless the PR relates to
+ a feature that has not yet been released)
+- Is not being actively reviewed by another docs person (other technical
+ reviewers are fine), unless that person has explicitly asked for your help. In
+ particular, leaving lots of new comments after other review cycles have
+ already been completed on a PR can be discouraging and counter-productive.
+
+If a PR is not eligible to merge, leave a comment to let the author know about
+the problem and offer to help them fix it. If they've been informed and have not
+fixed the problem in several weeks or months, eventually their PR will be closed
+without merging.
+
+If you're new to reviewing, or you don't have a lot of bandwidth, look for PRs
+with the `size/XS` or `size/S` tag set. The size is automatically determined by
+the number of lines the PR changes.
+
+#### Reviewers and approvers
+
+The Kubernetes website repo operates differently than some of the Kubernetes
+code repositories when it comes to the roles of reviewers and approvers. For
+more information about the responsibilities of reviewers and approvers, see
+[Participating](/docs/contribute/participating/). Here's an overview.
+
+- A reviewer reviews pull request content for technical accuracy. A reviewer
+ indicates that a PR is technically accurate by leaving a `/lgtm` comment on
+ the PR.
+
+ {{< note >}}Don't add an `/lgtm` unless you are confident in the technical
+ accuracy of the documentation modified or introduced in the PR.{{< /note >}}
+
+- An approver reviews pull request content for docs quality and adherence to
+ SIG Docs guidelines, such as the
+ [style guide](/docs/contribute/style/style-guide). Only people listed as
+ approvers in the
+ [`OWNERS`](https://github.com/kubernetes/website/blob/master/OWNERS) file can
+  approve a PR. To approve a PR, leave an `/approve` comment on the PR.
+
+A PR is merged when it has both a `/lgtm` comment from anyone in the Kubernetes
+organization and an `/approve` comment from an approver in the
+`sig-docs-maintainers` group, as long as it is not on hold and the PR author
+has signed the CLA.
+
+### Review a PR
+
+1. Read the PR description and read any attached issues or links, if
+ applicable. "Drive-by reviewing" is sometimes more harmful than helpful, so
+ make sure you have the right knowledge to provide a meaningful review.
+
+2. If someone else is the best person to review this particular PR, let them
+   know by adding a comment with `/assign @<github-username>`. If you have
+ asked a non-docs person for technical review but still want to review the PR
+ from a docs point of view, keep going.
+
+3. Go to the **Files changed** tab. Look over all the changed lines. Removed
+ content has a red background, and those lines also start with a `-` symbol.
+ Added content has a green background, and those lines also start with a `+`
+ symbol. Within a line, the actual modified content has a slightly darker
+ green background than the rest of the line.
+
+ - Especially if the PR uses tricky formatting or changes CSS, Javascript,
+ or other site-wide elements, you can preview the website with the PR
+ applied. Go to the **Conversation** tab and click the **Details** link
+ for the `deploy/netlify` test, near the bottom of the page. It opens in
+ the same browser window by default, so open it in a new window so you
+ don't lose your partial review. Switch back to the **Files changed** tab
+ to resume your review.
+ - Make sure the PR complies with the
+ [Documentation Style Guide](/docs/contribute/style/style-guide/)
+ and link the author to the relevant part of the style guide if not.
+ - If you have a question, comment, or other feedback about a given
+ change, hover over a line and click the blue-and-white `+` symbol that
+ appears. Type your comment and click **Start a review**.
+ - If you have more comments, leave them in the same way.
+ - By convention, if you see a small problem that does not have to do with
+ the main purpose of the PR, such as a typo or whitespace error, you can
+ call it out, prefixing your comment with `nit:` so that the author knows
+ you consider it trivial. They should still address it.
+   - When you've reviewed everything, or if you didn't have any comments, go
+ back to the top of the page and click **Review changes**. Choose either
+ **Comment** or **Request Changes**. Add a summary of your review, and
+ add appropriate
+ [Prow commands](https://prow.k8s.io/command-help) to separate lines in
+     the Review Summary field (a sample summary appears after these steps). SIG Docs follows the
+ [Kubernetes code review process](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md#the-code-review-process).
+ All of your comments will be sent to the PR author in a single
+ notification.
+
+ - If you think the PR is ready to be merged, add the text `/approve` to
+ your summary.
+ - If the PR does not need additional technical review, add the
+ text `/lgtm` as well.
+ - If the PR *does* need additional technical review, add the text
+ `/assign` with the Github username of the person who needs to
+ provide technical review. Look at the `reviewers` field in the
+ front-matter at the top of a given Markdown file to see who can
+ provide technical review.
+ - To prevent the PR from being merged, add `/hold`. This sets the
+ label `do-not-merge/hold`.
+ - If a PR has no conflicts and has the `lgtm` and `approved` label but
+ no `hold` label, it is merged automatically.
+ - If a PR has the `lgtm` and/or `approved` labels and new changes are
+ detected, these labels are removed automatically.
+
+ See
+ [the list of all available slash commands](https://prow.k8s.io/command-help)
+ that can be used in PRs.
+
+ - If you previously selected **Request changes** and the PR author has
+ addressed your concerns, you can change your review status either in the
+ **Files changed** tab or at the bottom of the **Conversation** tab. Be
+ sure to add the `/approve` tag and assign technical reviewers if necessary,
+ so that the PR can be merged.
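+
+As a sketch, a review summary that approves a PR from the docs side but hands
+technical review to another (hypothetical) reviewer might combine Prow commands
+like this:
+
+```
+The docs changes read well and follow the style guide.
+/approve
+/assign @<technical-reviewer>
+```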
+
+### Commit into another person's PR
+
+Leaving PR comments is helpful, but there may be times when you need to commit
+into another person's PR, rather than just leaving a review.
+
+Resist the urge to "take over" for another person unless they explicitly ask
+you to, or you want to resurrect a long-abandoned PR. While it may be faster
+in the short term, it deprives the person of the chance to contribute.
+
+The process you use depends on whether you need to edit a file that is already
+in the scope of the PR or a file that the PR has not yet touched.
+
+You can't commit into someone else's PR if either of the following things is
+true:
+
+- If the PR author pushed their branch directly to the
+ [https://github.com/kubernetes/website/](https://github.com/kubernetes/website/)
+ repository, only a reviewer with push access can commit into their PR.
+ Authors should be encouraged to push their branch to their fork before
+ opening the PR.
+- If the PR author explicitly disallowed edits from approvers, you can't
+ commit into their PR unless they change this setting.
+
+#### If the file is already changed by the PR
+
+This method uses the Github UI. If you are more comfortable working from the
+command line, you can use that workflow instead, even if the file you want to
+change is already part of the PR.
+
+1. Click the **Files changed** tab.
+2. Scroll down to the file you want to edit, and click the pencil icon for
+ that file.
+3. Make your changes, add a commit message in the field below the editor, and
+ click **Commit changes**.
+
+Your commit is now pushed to the branch the PR represents (probably on the
+author's fork) and now shows up in the PR and your changes are reflected in
+the **Files changed** tab. Leave a comment letting the PR author know you
+changed the PR.
+
+If the author is using the command line rather than the Github UI to work on
+this PR, they need to fetch their fork's changes and rebase their local branch
+on the branch in their fork, before doing additional work on the PR.
+
+#### If the file has not yet been changed by the PR
+
+If changes need to be made to a file that is not yet included in the PR, you
+need to use the command line. You can always use this method, if you prefer it
+to the Github UI.
+
+1. Get the URL for the author's fork. You can find it near the bottom of the
+ **Conversation** tab. Look for the text "Add more commits by pushing to".
+ The first link after this phrase is to the branch, and the second link is
+ to the fork. Copy the second link. Note the name of the branch for later.
+
+2. Add the fork as a remote. In your terminal, go to your clone of the
+ repository. Decide on a name to give the remote (such as the author's
+ Github username), and add the remote using the following syntax:
+
+ ```bash
+    git remote add <remote-name> <fork-url>
+ ```
+
+3. Fetch the remote. This doesn't change any local files, but updates your
+ clone's notion of the remote's objects (such as branches and tags) and
+ their current state.
+
+ ```bash
+    git fetch <remote-name>
+ ```
+
+4. Check out the remote branch. This command will fail if you already have a
+   local branch with the same name.
+
+ ```bash
+    git checkout -b <branch-name> <remote-name>/<branch-name>
+ ```
+
+5. Make your changes, use `git add` to add them, and commit them.
+
+6. Push your changes to the author's remote.
+
+ ```bash
+    git push <remote-name> <branch-name>
+ ```
+
+7. Go back to the Github UI and refresh the PR. Your changes appear. Leave the
+ PR author a comment letting them know you changed the PR.
+
+If the author is using the command line rather than the Github UI to work on
+this PR, they need to fetch their fork's changes and rebase their local branch
+on the branch in their fork, before doing additional work on the PR.
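+
+As a rough sketch, assuming the author's fork is configured as their `origin`
+remote and the PR branch is called `<branch-name>`, that looks like:
+
+```bash
+git fetch origin
+git rebase origin/<branch-name>
+```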
+
+## Work from a local clone
+
+For changes that require multiple files or changes that involve creating new
+files or moving files around, working from a local Git clone makes more sense
+than relying on the Github UI. These instructions use the `git` command and
+assume that you have it installed locally. You can adapt them to use a local
+graphical Git client instead.
+
+### Clone the repository
+
+You only need to clone the repository once per physical system where you work
+on the Kubernetes documentation.
+
+1. In a terminal window, use `git clone` to clone the repository. You do not
+ need any credentials to clone the repository.
+
+ ```bash
+ git clone https://github.com/kubernetes/website
+ ```
+
+ The new directory `website` is created in your current directory, with
+ the contents of the Github repository.
+
+2. Change to the new `website` directory. Rename the default `origin` remote
+ to `upstream`.
+
+ ```bash
+ cd website
+
+ git remote rename origin upstream
+ ```
+
+3. If you have not done so, create a fork of the repository on Github. In your
+ web browser, go to
+ [https://github.com/kubernetes/website](https://github.com/kubernetes/website)
+ and click the **Fork** button. After a few seconds, you are redirected to
+ the URL for your fork, which is typically something like
+   `https://github.com/<github_username>/website` unless you already had a repository
+ called `website`. Copy this URL.
+
+4. Add your fork as a second remote, called `origin`:
+
+ ```bash
+    git remote add origin <fork-url>
+ ```
+
+### Work on the local repository
+
+Before you start a new unit of work on your local repository, you need to figure
+out which branch to base your work on. The answer depends on what you are doing,
+but the following guidelines apply:
+
+- For general improvements to existing content, start from `master`.
+- For new content that is about features that already exist in a released
+ version of Kubernetes, start from `master`.
+- For long-running efforts that multiple SIG Docs contributors will collaborate on,
+ such as content reorganization, use a specific feature branch created for that
+ effort.
+- For new content that relates to upcoming but unreleased Kubernetes versions,
+ use the pre-release feature branch created for that Kubernetes version.
+
+For more guidance, see
+[Choose which branch to use](/docs/contribute/start/#choose-which-git-branch-to-use).
+
+After you decide which branch to start your work (or _base it on_, in Git
+terminology), use the following workflow to be sure your work is based on the
+most up-to-date version of that branch.
+
+1. Fetch both the `upstream` and `origin` branches. This updates your local
+ notion of what those branches contain, but does not change your local
+ branches at all.
+
+ ```bash
+ git fetch upstream
+ git fetch origin
+ ```
+
+2. Create a new tracking branch based on the branch you decided is the most
+ appropriate. This example assumes you are using `master`.
+
+ ```bash
+    git checkout -b <my_new_branch> upstream/master
+ ```
+
+ This new branch is based on `upstream/master`, not your local `master`.
+ It tracks `upstream/master`.
+
+3. With your new branch checked out, make your changes using a text editor.
+ At any time, use the `git status` command to see what you've changed.
+
+4. When you are ready to submit a pull request, commit your changes. First
+ use `git status` to see what changes need to be added to the changeset.
+ There are two important sections: `Changes staged for commit` and
+ `Changes not staged for commit`. Any files that show up in the latter
+ section under `modified` or `untracked` need to be added if you want them to
+ be part of this commit. For each file that needs to be added, use `git add`.
+
+ ```bash
+ git add example-file.md
+ ```
+
+ When all your intended changes are included, create a commit, using the
+ `git commit` command:
+
+ ```bash
+ git commit -m "Your commit message"
+ ```
+
+ {{< note >}}
+Do not reference a Github issue or pull request by ID or URL in the
+commit message. If you do, it will cause that issue or pull request to get
+a notification every time the commit shows up in a new Git branch. You can
+link issues and pull requests together later, in the Github UI.
+{{< /note >}}
+
+5. Optionally, you can test your change by staging the site locally using the
+ `hugo` command. See [View your changes locally](#view-your-changes-locally).
+ You'll be able to view your changes after you submit the pull request, as
+ well.
+
+6. Before you can create a pull request which includes your local commit, you
+ need to push the branch to your fork, which is the endpoint for the `origin`
+ remote.
+
+ ```bash
+    git push origin <my_new_branch>
+ ```
+
+ Technically, you can omit the branch name from the `push` command, but
+ the behavior in that case depends upon the version of Git you are using.
+ The results are more repeatable if you include the branch name.
+
+7. At this point, if you go to https://github.com/kubernetes/website in your
+ web browser, Github detects that you pushed a new branch to your fork and
+ offers to create a pull request. Fill in the pull request template.
+
+ - The title should be no more than 50 characters and summarize the intent
+ of the change.
+ - The long-form description should contain more information about the fix,
+ including a line like `Fixes #12345` if the pull request fixes a Github
+ issue. This will cause the issue to be closed automatically when the
+ pull request is merged.
+ - You can add labels or other metadata and assign reviewers. See
+ [Triage and categorize issues](#triage-and-categorize-issues) for the
+ syntax.
+
+ Click **Create pull request**.
+
+8. Several automated tests will run against the state of the website with your
+ changes applied. If any of the tests fails, click the **Details** link for
+ more information. If the Netlify test completes successfully, its
+ **Details** link goes to a staged version of the Kubernetes website with
+ your changes applied. This is how reviewers will check your changes.
+
+9. If you notice that more changes need to be made, or if reviewers give you
+   feedback, address the feedback locally, then repeat steps 4 - 6,
+ creating a new commit. The new commit is added to your pull request and the
+ tests run again, including re-staging the Netlify staged site.
+
+10. If a reviewer adds changes to your pull request, you need to fetch those
+ changes from your fork before you can add more changes. Use the following
+ commands to do this, assuming that your branch is currently checked out.
+
+ ```bash
+ git fetch origin
+    git rebase origin/<my_new_branch>
+ ```
+
+ After rebasing, you need to add the `-f` flag to force-push new changes to
+ the branch to your fork.
+
+ ```bash
+    git push -f origin <my_new_branch>
+ ```
+
+11. If someone else's change is merged into the branch your work is based on,
+ and you have made changes to the same parts of the same files, a conflict
+ might occur. If the pull request shows that there are conflicts to resolve,
+ you can resolve them using the Github UI or you can resolve them locally.
+
+ First, do step 10 to be sure that your fork and your local branch are in
+ the same state.
+
+ Next, fetch `upstream` and rebase your branch on the branch it was
+ originally based on, like `upstream/master`.
+
+ ```bash
+ git fetch upstream
+ git rebase upstream/master
+ ```
+
+ If there are conflicts Git can't automatically resolve, you can see the
+ conflicted files using the `git status` command. For each conflicted file,
+ edit it and look for the conflict markers `>>>`, `<<<`, and `===`. Resolve
+ the conflict and remove the conflict markers. Then add the changes to the
+    changeset using `git add <filename>` and continue the rebase using
+ `git rebase --continue`. When all commits have been applied and there are
+ no more conflicts, `git status` will show that you are not in a rebase and
+ there are no changes that need to be committed. At that point, force-push
+ the branch to your fork, and the pull request should no longer show any
+ conflicts.
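+
+    As a rough sketch of that sequence (the file and branch names below are
+    placeholders):
+
+    ```bash
+    git status                          # list the files that still have conflicts
+    # edit each conflicted file and remove the <<<, ===, and >>> markers
+    git add <filename>                  # mark the conflict in that file as resolved
+    git rebase --continue               # apply the remaining commits
+    git push -f origin <my_new_branch>  # force-push the rebased branch to your fork
+    ```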
+
+If you're having trouble resolving conflicts or you get stuck with
+anything else related to your pull request, ask for help on the `#sig-docs`
+Slack channel or the
+[kubernetes-sig-docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs).
+
+### View your changes locally
+
+If you aren't ready to create a pull request but you want to see what your
+changes look like, you can use the `hugo` command to stage the changes locally.
+
+1. Install Hugo version `0.40.3` or later.
+
+2. In a terminal, go to the root directory of your clone of the Kubernetes
+ docs, and enter this command:
+
+ ```bash
+ hugo server
+ ```
+
+3. In your browser’s address bar, enter `localhost:1313`.
+
+4. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`
+ or just close the terminal window.
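+
+If port `1313` is already in use on your machine, you can ask Hugo to serve on
+a different port instead (a sketch; any free port works):
+
+```bash
+hugo server --port 1414
+```
+
+Then browse to `localhost:1414` instead of `localhost:1313`.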
+
+
+## Triage and categorize issues
+
+In any given week, a specific docs approver volunteers to do initial
+[triage and review of pull requests](#review-pull-requests) and issues. To get
+on this list, attend the weekly SIG Docs meeting and volunteer. Even if you are
+not on the schedule for the current week, you can still review PRs.
+
+People in SIG Docs are only responsible for triaging and categorizing
+documentation issues. General website issues are also filed in the
+`kubernetes/website` repository.
+
+When you triage an issue, you:
+
+- Assess whether the issue has merit. Some issues can be closed quickly by
+ answering a question or pointing the reporter to a resource.
+- Ask the reporter for more information if the issue doesn't have enough
+ detail to be actionable or the template is not filled out adequately.
+- Add labels (sometimes called tags), projects, or milestones to the issue.
+ Projects and milestones are not heavily used by the SIG Docs team.
+- At your discretion, take ownership of an issue and submit a PR for it
+  (especially if it is quick or relates to work you were already doing).
+
+If you have questions about triaging an issue, ask in `#sig-docs` on Slack or
+the
+[kubernetes-sig-docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs).
+
+### More about labels
+
+These guidelines are not set in stone and are subject to change.
+
+- An issue can have multiple labels.
+- Some labels use slash notation for grouping, which can be thought of like
+ "sub-labels". For instance, many `sig/` labels exist, such as `sig/cli` and
+ `sig/api-machinery`.
+- Some labels are automatically added based on metadata in the files involved
+ in the issue, slash commands used in the comments of the issue, or
+ information in the issue text.
+- Some labels are manually added by the person triaging the issue (or the person
+  reporting the issue, if they are a SIG Docs approver).
+ - `Actionable`: there seems to be enough information for the issue to be fixed
+ or acted upon.
+ - `good first issue`: Someone with limited Kubernetes or SIG Docs experience
+ might be able to tackle this issue.
+ - `kind/bug`, `kind/feature`, and `kind/documentation`: If the person who
+ filed the issue did not fill out the template correctly, these labels may
+ not be assigned automatically. A bug is a problem with existing content or
+ functionality, and a feature is a request for new content or functionality.
+ The `kind/documentation` label is not currently in use.
+ - Priority labels: define the relative severity of the issue. These do not
+ conform to those outlined in the
+ [Kubernetes contributor guide](https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md#define-priority), and can be one of `P1`, `P2`, or `P3`, if set.
+- To add a label, you can use Github's **Labels** widget if you are a SIG Docs
+  approver. Anyone who is a member of the Kubernetes organization can add a
+  label by leaving a comment like `/label <label-to-add>`. The label must
+  already exist. If you try to add a label that does not exist, the command is
+  silently ignored.
+
+### Priorities
+
+An issue's priority influences how quickly it is addressed. For documentation,
+here are the guidelines for setting a priority on an issue:
+
+#### P1
+
+- Major content errors affecting more than 1 page
+- Broken code sample on a heavily trafficked page
+- Errors on a “getting started” page
+- Well known or highly publicized customer pain points
+- Automation issues
+
+#### P2
+
+This is the default for new issues and pull requests.
+
+- Broken code sample on a page that is not heavily trafficked
+- Minor content issues in a heavily trafficked page
+- Major content issues on a lower-trafficked page
+
+#### P3
+
+- Typos and broken anchor links
+- Documentation feature requests
+- "Nice to have" items
+
+### Handling special issue types
+
+We've encountered the following types of issues often enough to document how
+to handle them.
+
+#### Duplicate issues
+
+If a single problem has more than one issue open for it, consolidate the
+problem into a single issue. Decide which issue to keep open (or open a new
+issue), port over all relevant information, link related issues, and close all
+the other issues that describe the same problem. Having only a single issue to
+work on helps reduce confusion and avoids duplicate work on the same problem.
+
+#### Dead link issues
+
+Depending on where the dead link is reported, different actions are required to
+resolve the issue. Dead links in the API and Kubectl docs are automation issues
+and should be assigned a P1 until the problem can be fully understood. All other
+dead links are issues that need to be manually fixed and can be assigned a P3.
+
+#### Support requests or code bug reports
+
+Some issues opened for docs are instead issues with the underlying code, or
+requests for assistance when something (like a tutorial) didn’t work. For issues
+unrelated to docs, close the issue with a comment directing the requester to
+support venues (Slack, Stack Overflow) and, if relevant, where to file an issue
+for bugs with features (kubernetes/kubernetes is a great place to start).
+
+Sample response to a request for support:
+
+```none
+This issue sounds more like a request for support and less
+like an issue specifically for docs. I encourage you to bring
+your question to the `#kubernetes-users` channel in
+[Kubernetes slack](http://slack.k8s.io/). You can also search
+resources like
+[Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
+for answers to similar questions.
+
+You can also open issues for Kubernetes functionality in
+ https://github.com/kubernetes/kubernetes.
+
+If this is a documentation issue, please re-open this issue.
+```
+
+Sample code bug report response:
+
+```none
+This sounds more like an issue with the code than an issue with
+the documentation. Please open an issue at
+https://github.com/kubernetes/kubernetes/issues.
+
+If this is a documentation issue, please re-open this issue.
+```
+
+## Document new features
+
+Each major Kubernetes release includes new features, and many of them need
+at least a small amount of documentation to show people how to use them.
+
+Often, the SIG responsible for a feature submits draft documentation for the
+feature as a pull request to the appropriate release branch of
+the `kubernetes/website` repository, and someone on the SIG Docs team provides
+editorial feedback or edits the draft directly.
+
+### Find out about upcoming features
+
+To find out about upcoming features, attend the weekly sig-release meeting (see
+the [community](https://kubernetes.io/community/) page for upcoming meetings)
+and monitor the release-specific documentation
+in the [kubernetes/sig-release](https://github.com/kubernetes/sig-release/)
+repository. Each release has a sub-directory under the [/sig-release/tree/master/releases/](https://github.com/kubernetes/sig-release/tree/master/releases)
+directory. Each sub-directory contains a release schedule, a draft of the release
+notes, and a document listing each person on the release team.
+
+- The release schedule contains links to all other documents, meetings,
+ meeting minutes, and milestones relating to the release. It also contains
+ information about the goals and timeline of the release, and any special
+ processes in place for this release. Near the bottom of the document, several
+ release-related terms are defined.
+
+ This document also contains a link to the **Feature tracking sheet**, which is
+ the official way to find out about all new features scheduled to go into the
+ release.
+- The release team document lists who is responsible for each release role. If
+ it's not clear who to talk to about a specific feature or question you have,
+ either attend the release meeting to ask your question, or contact the release
+ lead so that they can redirect you.
+- The release notes draft is a good place to find out a little more about
+ specific features, changes, deprecations, and more about the release. The
+ content is not finalized until late in the release cycle, so use caution.
+
+#### The feature tracking sheet
+
+The feature tracking sheet
+[for a given Kubernetes release](https://github.com/kubernetes/sig-release/tree/master/releases) lists each feature that is planned for a release.
+Each line item includes the name of the feature, a link to the feature's main
+Github issue, its stability level (Alpha, Beta, or Stable), the SIG and
+individual responsible for implementing it, whether it
+needs docs, a draft release note for the feature, and whether it has been
+merged. Keep the following in mind:
+
+- Beta and Stable features are generally a higher documentation priority than
+ Alpha features.
+- It's hard to test (and therefore document) a feature that hasn't been merged,
+  or isn't at least considered feature-complete in its PR.
+- Determining whether a feature needs documentation is a manual process and
+ just because a feature is not marked as needing docs doesn't mean it doesn't
+ need them.
+
+### Document a feature
+
+As stated above, draft content for new features is usually submitted by the SIG
+responsible for implementing the new feature. This means that your role may be
+more of a shepherding role for a given feature than developing the documentation
+from scratch.
+
+After you've chosen a feature to document/shepherd, ask about it in the `#sig-docs`
+Slack channel, in a weekly sig-docs meeting, or directly on the PR filed by the
+feature SIG. If you're given the go-ahead, you can commit into the PR using one of
+the techniques described in
+[Commit into another person's PR](#commit-into-another-persons-pr).
+
+If you need to write a new topic, the following links are useful:
+- [Writing a New Topic](/docs/contribute/style/write-new-topic/)
+- [Using Page Templates](/docs/contribute/style/page-templates/)
+- [Documentation Style Guide](/docs/contribute/style/style-guide/)
+
+### SIG members documenting new features
+
+If you are a member of a SIG developing a new feature for Kubernetes, you need
+to work with SIG Docs to be sure your feature is documented in time for the
+release. Check the
+[feature tracking spreadsheet](https://github.com/kubernetes/sig-release/tree/master/releases)
+or check in the #sig-release Slack channel to verify scheduling details and
+deadlines. Some deadlines related to documentation are:
+
+- **Docs deadline - Open placeholder PRs**: Open a pull request against the
+ `release-X.Y` branch in the `kubernetes/website` repository, with a small
+ commit that you will amend later. Use the Prow command `/milestone X.Y` to
+ assign the PR to the relevant milestone. This alerts the docs person managing
+  this release that the feature docs are coming (see the sketch after this
+  list for one way to open a placeholder PR). If your feature does not need
+  any documentation changes, make sure the sig-release team knows this by
+  mentioning it in the #sig-release Slack channel. If the feature does need
+  documentation but the PR is not created, the feature may be removed from the
+  milestone.
+- **Docs deadline - PRs ready for review**: Your PR now needs to contain a first
+ draft of the documentation for your feature. Don't worry about formatting or
+ polishing. Just describe what the feature does and how to use it. The docs
+ person managing the release will work with you to get the content into shape
+ to be published. If your feature needs documentation and the first draft
+ content is not received, the feature may be removed from the milestone.
+- **Docs complete - All PRs reviewed and ready to merge**: If your PR has not
+ yet been merged into the `release-X.Y` branch by this deadline, work with the
+ docs person managing the release to get it in. If your feature needs
+ documentation and the docs are not ready, the feature may be removed from the
+ milestone.
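+
+One way to open the placeholder PR from the command line (a sketch; the branch
+name is a placeholder, `X.Y` stands for the release you are targeting, and you
+can also use the Github UI instead):
+
+```bash
+# assumes `upstream` points at kubernetes/website and `origin` at your fork
+git fetch upstream
+git checkout -b my-feature-docs upstream/release-X.Y
+# a small placeholder commit that you will amend later with the real docs
+git commit --allow-empty -m "Placeholder for my-feature docs"
+git push origin my-feature-docs
+# open a PR against the release-X.Y branch, then comment `/milestone X.Y` on it
+```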
+
+## Contribute to other repos
+
+The [Kubernetes project](https://github.com/kubernetes) contains more than 50
+individual repositories. Many of these repositories contain code or content that
+can be considered documentation, such as user-facing help text, error messages,
+user-facing text in API references, or even code comments.
+
+If you see text and you aren't sure where it comes from, you can use Github's
+search tool at the level of the Kubernetes organization to search through all
+repositories for that text. This can help you figure out where to submit your
+issue or PR.
+
+Each repository may have its own processes and procedures. Before you file an
+issue or submit a PR, read that repository's `README.md`, `CONTRIBUTING.md`, and
+`code-of-conduct.md`, if they exist.
+
+Most repositories use issue and PR templates. Have a look through some open
+issues and PRs to get a feel for that team's processes. Make sure to fill out
+the templates with as much detail as possible when you file issues or PRs.
+
+## Localize content
+
+The Kubernetes documentation is written in English first, but we want people to
+be able to read it in their language of choice. If you are comfortable
+writing in another language, especially in the software domain, you can help
+localize the Kubernetes documentation or provide feedback on existing localized
+content. See [Localization](/docs/contribute/localization/) and ask on the
+[kubernetes-sig-docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
+or in `#sig-docs` on Slack if you are interested in helping out.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+When you are comfortable with all of the tasks discussed in this topic and you
+want to engage with the Kubernetes docs team in even deeper ways, read the
+[advanced docs contributor](/docs/contribute/advanced/) topic.
+
+{{% /capture %}}
diff --git a/content/en/docs/home/contribute/localization.md b/content/en/docs/contribute/localization.md
similarity index 100%
rename from content/en/docs/home/contribute/localization.md
rename to content/en/docs/contribute/localization.md
diff --git a/content/en/docs/contribute/participating.md b/content/en/docs/contribute/participating.md
new file mode 100644
index 000000000..a5fc0bb66
--- /dev/null
+++ b/content/en/docs/contribute/participating.md
@@ -0,0 +1,304 @@
+---
+title: Participating in SIG Docs
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+SIG Docs is one of the
+[special interest groups](https://github.com/kubernetes/community/blob/master/sig-list.md)
+within the Kubernetes project, focused on writing, updating, and maintaining
+the documentation for Kubernetes as a whole. See
+[SIG Docs from the community github repo](https://github.com/kubernetes/community/tree/master/sig-docs)
+for more information about the SIG.
+
+SIG Docs welcomes content and reviews from all contributors. Anyone can open a
+pull request (PR), and anyone is welcome to file issues about content or comment
+on pull requests in progress.
+
+Within SIG Docs, you may also become a [member](#members),
+[reviewer](#reviewers), or [approver](#approvers). These roles require greater
+access and entail certain responsibilities for approving and committing changes.
+See [community-membership](https://github.com/kubernetes/community/blob/master/community-membership.md)
+for more information on how membership works within the Kubernetes community.
+The rest of this document outlines some unique ways these roles function within
+SIG Docs, which is responsible for maintaining one of the most public-facing
+aspects of Kubernetes -- the Kubernetes website and documentation.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Roles and responsibilities
+
+When a pull request is merged to the branch used to publish content (currently
+`master`), that content is published and available to the world. To ensure that
+the quality of our published content is high, we limit merging pull requests to
+SIG Docs approvers. Here's how it works.
+
+- When a pull request has both the `lgtm` and `approve` labels and has no `hold`
+ labels, the pull request merges automatically.
+- Kubernetes organization members and SIG Docs approvers can add comments to
+ prevent automatic merging of a given pull request (by adding a `/hold` comment
+ or withholding a `/lgtm` comment).
+- Any Kubernetes member can add the `lgtm` label, by adding a `/lgtm` comment.
+- Only an approver who is a member of SIG Docs can cause a pull request to merge
+ by adding an `/approve` comment. Some approvers also perform additional
+ specific roles, such as [PR Wrangler](#pr-wrangler) or
+ [SIG Docs chairperson](#sig-docs-chairperson).
+
+For more information about expectations and differences between the roles of
+Kubernetes organization member and SIG Docs approvers, see
+[Types of contributor](/docs/contribute#types-of-contributor). The following
+sections cover more details about these roles and how they work within
+SIG Docs.
+
+### Anyone
+
+Anyone can file an issue against any part of Kubernetes, including documentation.
+
+Anyone who has signed the CLA can submit a pull request. If you cannot sign the
+CLA, the Kubernetes project cannot accept your contribution.
+
+### Members
+
+Any member of the [Kubernetes organization](https://github.com/kubernetes) can
+review a pull request, and SIG Docs team members frequently request reviews from
+members of other SIGs for technical accuracy.
+SIG Docs also welcomes reviews and feedback regardless of a person's membership
+status in the Kubernetes organization. You can indicate your approval by adding
+a comment of `/lgtm` to a pull request. If you are not a member of the
+Kubernetes organization, your `/lgtm` has no effect on automated systems.
+
+Any member of the Kubernetes organization can add a `/hold` comment to prevent
+the pull request from being merged. Any member can also remove a `/hold` comment
+to cause a PR to be merged if it already has both `/lgtm` and `/approve` applied
+by appropriate people.
+
+#### Becoming a member
+
+After you have successfully submitted at least 5 substantive pull requests, you
+can request [membership](https://github.com/kubernetes/community/blob/master/community-membership.md#member)
+in the Kubernetes organization. Follow these steps:
+
+1. Find two reviewers or approvers to [sponsor](/docs/contribute/advanced#sponsor-a-new-contributor)
+ your membership.
+
+ Ask for sponsorship in the #sig-docs channel on the
+   [Kubernetes Slack instance](https://kubernetes.slack.com) or on the
+ [SIG Docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs).
+
+ {{< note >}}
+ Don't send a direct email or Slack direct message to an individual
+ SIG Docs member.
+ {{< /note >}}
+
+2. Send an email to the [Kubernetes membership request list](mailto:kubernetes-membership@googlegroups.com)
+ and add your two sponsors and any other relevant people to the CC of the
+ email. Use the following template.
+
+ ```plaintext
+ I have joined kubernetes-dev@googlegroups.com and fulfilled all the
+ prerequisites outlined at
+ https://github.com/kubernetes/community/blob/master/community-membership.md.
+
+ Sponsors:
+ - Github username / email address
+ - Github username / email address
+
+ List of contributions:
+ - PR URL or other link - description or summary
+ - PR URL or other link - description or summary
+ - PR URL or other link - description or summary
+ - PR URL or other link - description or summary
+ - PR URL or other link - description or summary
+
+ Thanks for your consideration,
+ Your Name
+ ```
+
+3. Wait for your sponsors to reply, and be available to answer any questions
+ that your sponsors or other Kubernetes leadership has, and for the final
+ result of your application.
+
+If for some reason your membership request is not accepted right away, the
+membership committee provides information or steps to take before applying
+again.
+
+### Reviewers
+
+Reviewers are members of the
+[@kubernetes/sig-docs-pr-reviews](https://github.com/orgs/kubernetes/teams/sig-docs-pr-reviews)
+Github group. See [Teams and groups within SIG Docs](#teams-and-groups-within-sig-docs).
+
+Reviewers review documentation pull requests and provide feedback on proposed
+changes.
+
+Automation assigns reviewers to pull requests, and contributors can request a
+review from a specific reviewer with a comment on the pull request: `/assign
+[@_github_handle]`. To indicate that a pull request is technically accurate and
+requires no further changes, a reviewer adds a `/lgtm` comment to the pull
+request.
+
+If the assigned reviewer has not yet reviewed the content, another reviewer can
+step in. In addition, you can assign technical reviewers and wait for them to
+provide `/lgtm`.
+
+For a trivial change or one that needs no technical review, the SIG Docs
+[approver](#approvers) can provide the `/lgtm` as well.
+
+A `/approve` comment from a reviewer is ignored by automation.
+
+For more about how to become a SIG Docs reviewer and the responsibilities and
+time commitment involved, see
+[Becoming a reviewer](#becoming-a-reviewer).
+
+#### Becoming a reviewer
+
+When you meet the
+[requirements](https://github.com/kubernetes/community/blob/master/community-membership.md#reviewer),
+you can become a SIG Docs reviewer. Reviewers in other SIGs must apply
+separately for reviewer status in SIG Docs.
+
+To apply, open a pull request to add yourself to the `reviewers` section of the
+[top-level OWNERS file](https://github.com/kubernetes/website/blob/master/OWNERS)
+in the `kubernetes/website` repository. Assign the PR to one or more current SIG
+Docs approvers.
+
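+One way to prepare this pull request from the command line (a sketch; the
+branch name and username are placeholders, and you can also edit the file
+directly in the Github UI):
+
+```bash
+# assumes `upstream` points at kubernetes/website and `origin` at your fork
+git fetch upstream
+git checkout -b add-reviewer upstream/master
+# edit OWNERS and add your Github username to the `reviewers` section
+git add OWNERS
+git commit -m "Add <your-github-username> to SIG Docs reviewers"
+git push origin add-reviewer
+```
+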
+If your pull request is approved, you are now a SIG Docs reviewer.
+[K8s-ci-robot](https://github.com/kubernetes/test-infra/tree/master/prow#bots-home)
+will assign and suggest you as a reviewer on new pull requests.
+
+If you are approved, request that a current SIG Docs approver add you to the
+[@kubernetes/sig-docs-pr-reviews](https://github.com/orgs/kubernetes/teams/sig-docs-pr-reviews)
+Github group. Only members of the `kubernetes-website-admins` Github group can
+add new members to a Github group.
+
+### Approvers
+
+Approvers are members of the
+[@kubernetes/sig-docs-maintainers](https://github.com/orgs/kubernetes/teams/sig-docs-maintainers)
+Github group. See [Teams and groups within SIG Docs](#teams-and-groups-within-sig-docs).
+
+Approvers have the ability to merge a PR, and thus, to publish content on the
+Kubernetes website. To approve a PR, an approver leaves an `/approve` comment on
+the PR. If someone who is not an approver leaves the approval comment,
+automation ignores it.
+
+If the PR already has a `/lgtm`, or if the approver also comments with `/lgtm`,
+the PR merges automatically. A SIG Docs approver should only leave a `/lgtm` on
+a change that doesn't need additional technical review.
+
+For more about how to become a SIG Docs approver and the responsibilities and
+time commitment involved, see
+[Becoming an approver](#becoming-an-approver).
+
+#### Becoming an approver
+
+When you meet the
+[requirements](https://github.com/kubernetes/community/blob/master/community-membership.md#approver),
+you can become a SIG Docs approver. Approvers in other SIGs must apply
+separately for approver status in SIG Docs.
+
+To apply, open a pull request to add yourself to the `approvers` section of the
+[top-level OWNERS file](https://github.com/kubernetes/website/blob/master/OWNERS)
+in the `kubernetes/website` repository. Assign the PR to one or more current SIG
+Docs approvers.
+
+If your pull request is approved, you are now a SIG Docs approver.
+[K8s-ci-robot](https://github.com/kubernetes/test-infra/tree/master/prow#bots-home)
+will assign and suggest you as a reviewer on new pull requests.
+
+If you are approved, request that a current SIG Docs approver add you to the
+[@kubernetes/sig-docs-maintainers](https://github.com/orgs/kubernetes/teams/sig-docs-maintainers)
+Github group. Only members of the `kubernetes-website-admins` Github group can
+add new members to a Github group.
+
+#### Becoming a website admin
+
+Members of the `kubernetes-website-admins` Github group can manage Github group
+membership and have full administrative rights to the settings of the repository,
+including the ability to add, remove, and troubleshoot webhooks. Not all SIG
+Docs approvers need this level of access.
+
+If you think you need this level of access, talk to an existing website admin or
+ask in the #sig-docs channel on [Kubernetes Slack](https://kubernetes.slack.com).
+
+#### PR Wrangler
+
+SIG Docs approvers are added to the
+[PR Wrangler rotation scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers)
+for weekly rotations. All SIG Docs approvers are expected to take part in this
+rotation. See
+[Be the PR Wrangler for a week](/docs/contribute/advanced#be-the-pr-wrangler-for-a-week)
+for more details.
+
+#### SIG Docs chairperson
+
+Each SIG, including SIG Docs, selects one or more SIG members to act as
+chairpersons. These chairpersons act as points of contact between SIG Docs and
+other parts of the Kubernetes organization. The role requires extensive
+knowledge of the structure of the Kubernetes project as a whole and of how
+SIG Docs works within it. See
+[Leadership](https://github.com/kubernetes/community/tree/master/sig-docs#leadership)
+for the current list of chairpersons.
+
+## SIG Docs teams and automation
+
+SIG Docs relies on two different mechanisms for automation: Github groups and
+OWNERS files.
+
+### Github groups
+
+The SIG Docs group defines two teams on Github:
+
+ - [@kubernetes/sig-docs-maintainers](https://github.com/orgs/kubernetes/teams/sig-docs-maintainers)
+ - [@kubernetes/sig-docs-pr-reviews](https://github.com/orgs/kubernetes/teams/sig-docs-pr-reviews)
+
+Each can be referenced by its `@name` in Github comments to communicate with
+everyone in that group.
+
+These teams overlap with, but do not exactly match, the groups used by the
+automation tooling. To assign issues and pull requests and to support PR
+approvals, the automation uses information from OWNERS files.
+
+### OWNERS files and front-matter
+
+The Kubernetes project uses an automation tool called prow for automation
+related to Github issues and pull requests. The
+[Kubernetes website repository](https://github.com/kubernetes/website) uses
+two [prow plugins](https://github.com/kubernetes/test-infra/blob/master/prow/plugins.yaml#L210):
+
+- blunderbuss
+- approve
+
+These two plugins use the
+[OWNERS](https://github.com/kubernetes/website/blob/master/OWNERS) and
+[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/master/OWNERS_ALIASES)
+files in the top level of the `kubernetes/website` Github repository to control
+how prow works within the repository.
+
+An OWNERS file contains a list of people who are SIG Docs reviewers and
+approvers. OWNERS files can also exist in subdirectories, and can override who
+can act as a reviewer or approver of files in that subdirectory and its
+descendents. For more information about OWNERS files in general, see
+[OWNERS](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md).
+
+In addition, an individual Markdown file can list reviewers and approvers in its
+front-matter, either by listing individual Github usernames or Github groups.
+
+The combination of OWNERS files and front-matter in Markdown files determines
+the advice PR owners get from automated systems about who to ask for technical
+and editorial review of their PR.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+For more information about contributing to the Kubernetes documentation, see:
+
+- [Start contributing](/docs/contribute/start/)
+- [Documentation style](/docs/contribute/style/)
+
+{{% /capture %}}
+
+
diff --git a/content/en/docs/contribute/start.md b/content/en/docs/contribute/start.md
new file mode 100644
index 000000000..0a18d0164
--- /dev/null
+++ b/content/en/docs/contribute/start.md
@@ -0,0 +1,355 @@
+---
+title: Start contributing
+slug: start
+content_template: templates/concept
+weight: 10
+---
+
+{{% capture overview %}}
+
+If you want to get started contributing to the Kubernetes documentation, this
+page and its linked topics can help you get started. You don't need to be a
+developer or a technical writer to make a big impact on the Kubernetes
+documentation and user experience! All you need for the topics on this page is
+a [Github account](https://github.com/join) and a web browser.
+
+If you're looking for information on how to start contributing to Kubernetes
+code repositories, refer to
+[the Kubernetes community guidelines](https://github.com/kubernetes/community/blob/master/governance.md).
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## The basics about our docs
+
+The Kubernetes documentation is written in Markdown and processed and deployed
+using Hugo. The source is in Github at
+[https://github.com/kubernetes/website](https://github.com/kubernetes/website).
+Most of the documentation source is stored in `/content/en/docs/`. Some of the
+reference documentation is automatically generated by scripts, mostly in the
+`/content/en/docs/imported/` subdirectory.
+
+You can file issues, edit content, and review changes from others, all from the
+Github website. You can also use Github's embedded history and search tools.
+
+Not all tasks can be done in the Github UI; tasks that can't are discussed in
+the [intermediate](/docs/contribute/intermediate/) and
+[advanced](/docs/contribute/advanced/) docs contribution guides.
+
+### Participating in SIG Docs
+
+The Kubernetes documentation is maintained by a special interest group (SIG)
+called SIG Docs. We communicate using a Slack channel, a mailing list, and
+weekly video meetings. New participants are welcome. For more information, see
+[Participating in SIG Docs](/docs/contribute/participating/).
+
+### Style guidelines
+
+We maintain a [style guide](/docs/contribute/style/style-guide/) with information
+about choices the SIG Docs community has made about grammar, syntax, source
+formatting, and typographic conventions. Look over the style guide before you
+make your first contribution, and use it when you have questions.
+
+Changes to the style guide are made by SIG Docs as a group. To propose a change
+or addition, [add it to the agenda](https://docs.google.com/document/d/1Ds87eRiNZeXwRBEbFr6Z7ukjbTow5RQcNZLaSvWWQsE/edit#) for an upcoming SIG Docs meeting, and attend the meeting to participate in the
+discussion. See the [advanced contribution](/docs/contribute/advanced/) topic for more
+information.
+
+### Page templates
+
+We use page templates to control the presentation of our documentation pages.
+Be sure to understand how these templates work by reviewing
+[Using page templates](/docs/contribute/style/page-templates/).
+
+### Hugo shortcodes
+
+The Kubernetes documentation is transformed from Markdown to HTML using Hugo.
+We make use of the standard Hugo shortcodes, as well as a few that are custom to
+the Kubernetes documentation. See [Custom Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/) for
+information about how to use them.
+
+## File actionable issues
+
+Anyone with a Github account can file an issue (bug report) against the
+Kubernetes documentation. If you see something wrong, even if you have no idea
+how to fix it, [file an issue](#how-to-file-an-issue). The exception to this
+rule is a tiny bug like a typo that you intend to fix yourself. In that case,
+you can instead [fix it](#fix-it) without filing a bug first.
+
+### How to file an issue
+
+- **On an existing page**
+
+ If you see a problem in an existing page in the [Kubernetes docs](/docs/),
+ go to the bottom of the page and click the **Create an Issue** button. If
+ you are not currently logged in to Github, log in. A Github issue form
+ appears with some pre-populated content.
+
+ Using Markdown, fill in as many details as you can. In places where you see
+ empty square brackets (`[ ]`), put an `x` between the set of brackets that
+ represents the appropriate choice. If you have a proposed solution to fix
+ the issue, add it.
+
+- **Request a new page**
+
+ If you think content should exist, but you aren't sure where it should go or
+ you don't think it fits within the pages that currently exist, you can
+ still file an issue. You can either choose an existing page near where you think the
+ new content should go and file the issue from that page, or go straight to
+ [https://github.com/kubernetes/website/issues/new/](https://github.com/kubernetes/website/issues/new/)
+ and file the issue from there.
+
+### How to file great issues
+
+To ensure that we understand your issue and can act on it, keep these guidelines
+in mind:
+
+- Use the issue template, and fill out as many details as you can.
+- Clearly explain the specific impact the issue has on users.
+- Limit the scope of a given issue to a reasonable unit of work. For problems
+ with a large scope, break them down into smaller issues.
+
+ For instance, "Fix the security docs" is not an actionable issue, but "Add
+ details to the 'Restricting network access' topic" might be.
+- If the issue relates to another issue or pull request, you can refer to it
+  either by its full URL or by the issue or pull request number prefixed
+ with a `#` character. For instance, `Introduced by #987654`.
+- Be respectful and avoid venting. For instance, "The docs about X suck" is not
+ helpful or actionable feedback. The
+ [Code of Conduct](/community/code-of-conduct/) also applies to interactions on
+ Kubernetes Github repositories.
+
+## Participate in SIG Docs discussions
+
+The SIG Docs team communicates using the following mechanisms:
+
+- [Join the Kubernetes Slack instance](http://slack.k8s.io/), then join the
+ `#sig-docs` channel, where we discuss docs issues in real-time. Be sure to
+ introduce yourself!
+- [Join the `kubernetes-sig-docs` mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs),
+ where broader discussions take place and official decisions are recorded.
+- Participate in the weekly SIG Docs video meeting, which is announced on the
+ Slack channel and the mailing list. Currently, these meetings take place on
+ Zoom, so you'll need to download the [Zoom client](https://zoom.us/download)
+ or dial in using a phone.
+
+## Improve existing content
+
+To improve existing content, you file a _pull request (PR)_ after creating a
+_fork_. Those two terms are [specific to Github](https://help.github.com/categories/collaborating-with-issues-and-pull-requests/).
+For the purposes of this topic, you don't need to know everything about them,
+because you can do everything using your web browser. When you continue to the
+[intermediate docs contributor guide](/docs/contribute/intermediate/), you will
+need more background in Git terminology.
+
+{{< note >}}
+**Kubernetes code developers**: If you are documenting a new feature for an
+upcoming Kubernetes release, your process is a bit different. See
+[Document a feature](/docs/contribute/intermediate/#sig-members-documenting-new-features) for
+process guidelines and information about deadlines.
+{{< /note >}}
+
+### Sign the CLA
+
+Before you can contribute code or documentation to Kubernetes, you **must** read
+the [Contributor guide](/docs/imported/community/guide/) and
+[sign the Contributor License Agreement (CLA)](/docs/imported/community/guide/#sign-the-cla).
+Don't worry -- this doesn't take long!
+
+### Find something to work on
+
+If you see something you want to fix right away, just follow the instructions
+below. You don't need to [file an issue](#file-actionable-issues) (although you
+certainly can).
+
+If you want to start by finding an existing issue to work on, go to
+[https://github.com/kubernetes/website/issues](https://github.com/kubernetes/website/issues)
+and look for issues with the label `good first issue` (you can use
+[this](https://github.com/kubernetes/website/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) shortcut). Read through the comments and make sure there is not an open pull
+request against the issue and that nobody has recently left a comment saying
+they are working on the issue (within the last three days is a good rule).
+Leave a comment saying that you would like to work on the issue.
+
+### Choose which Git branch to use
+
+The most important aspect of submitting pull requests is choosing which branch
+to base your work on. Use these guidelines to make the decision:
+
+- Use `master` for fixing problems in content that is already published, or
+ making improvements to content that already exists.
+- Use a release branch (such as `release-1.12`) to document upcoming features
+ or changes for an upcoming release that is not yet published.
+- Use a feature branch that has been agreed upon by SIG Docs to collaborate on
+ big improvements or changes to the existing documentation, including content
+ reorganization or changes to the look and feel of the website.
+
+If you're still not sure which branch to choose, ask in `#sig-docs` on Slack or
+attend a weekly SIG Docs meeting to get clarity.
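+
+If you are working from a local clone rather than the Github UI (the
+command-line workflow is covered in the
+[intermediate docs contributor guide](/docs/contribute/intermediate/)), here is
+a sketch of basing a working branch on the chosen starting point; the remote
+and branch names are placeholders:
+
+```bash
+# assumes `upstream` points at kubernetes/website
+git fetch upstream
+git checkout -b my-docs-improvement upstream/master
+# or, for docs about an upcoming release:
+# git checkout -b my-feature-docs upstream/release-X.Y
+```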
+
+### Submit a pull request
+
+Follow these steps to submit a pull request to improve the Kubernetes
+documentation.
+
+1. On the page where you see the issue, click the pencil icon at the top left.
+ A new page appears, with some help text.
+2. Click the first blue button, which has the text **Edit <page name>**.
+
+ If you have never created a fork of the Kubernetes documentation
+ repository, you are prompted to do so. Create the fork under your Github
+ username, rather than another organization you may be a member of. The
+   fork usually has a URL such as `https://github.com/<username>/website`,
+ unless you already have a repository with a conflicting name.
+
+ The reason you are prompted to create a fork is that you do not have
+ access to push a branch directly to the definitive Kubernetes repository.
+
+3. The Github Markdown editor appears with the source Markdown file loaded.
+ Make your changes. Below the editor, fill in the **Propose file change**
+ form. The first field is the summary of your commit message and should be
+ no more than 50 characters long. The second field is optional, but can
+ include more detail if appropriate.
+
+ {{< note >}}
+**Note**: Do not include references to other Github issues or pull
+requests in your commit message. You can add those to the pull request
+description later.
+{{< /note >}}
+
+ Click **Propose file change**. The change is saved as a commit in a
+ new branch in your fork, which is automatically named something like
+ `patch-1`.
+
+4. The next screen summarizes the changes you made, by comparing your new
+ branch (the **head fork** and **compare** selection boxes) to the current
+ state of the **base fork** and **base** branch (`master` on the
+ `kubernetes/website` repository by default). You can change any of the
+ selection boxes, but don't do that now. Have a look at the difference
+ viewer on the bottom of the screen, and if everything looks right, click
+ **Create pull request**.
+
+ {{< note >}}
+**Note**: If you don't want to create the pull request now, you can do it
+later, by browsing to the main URL of the Kubernetes website repository or
+your fork's repository. The Github website will prompt you to create the
+pull request if it detects that you pushed a new branch to your fork.
+{{< /note >}}
+
+5. The **Open a pull request** screen appears. The subject of the pull request
+ is the same as the commit summary, but you can change it if needed. The
+ body is populated by your extended commit message (if present) and some
+ template text. Read the template text and fill out the details it asks for,
+ then delete the extra template text. Leave the
+ **Allow edits from maintainers** checkbox selected. Click
+ **Create pull request**.
+
+ Congratulations! Your pull request is available in
+ [Pull requests](https://github.com/kubernetes/website/pulls).
+
+ After a few minutes, you can preview the website with your PR's changes
+ applied. Go to the **Conversation** tab of your PR and click the **Details**
+ link for the `deploy/netlify` test, near the bottom of the page. It opens in
+ the same browser window by default.
+
+6. Wait for review. Generally, reviewers are suggested by the `k8s-ci-robot`.
+ If a reviewer asks you to make changes, you can go to the **Files changed**
+ tab and click the pencil icon on any files that have been changed by the
+ pull request. When you save the changed file, a new commit is created in
+ the branch being monitored by the pull request.
+
+7. If your change is accepted, a reviewer merges your pull request, and the
+ change is live on the Kubernetes website a few minutes later.
+
+This is only one way to submit a pull request. If you are already a Git and
+Github advanced user, you can use a local GUI or command-line Git client
+instead of using the Github UI. Some basics about using the command-line Git
+client are discussed in the [intermediate](/docs/contribute/intermediate/) docs
+contribution guide.
+
+## Review docs pull requests
+
+People who are not yet approvers or reviewers can still review pull requests.
+The reviews are not considered "binding", which means that your review alone
+won't cause a pull request to be merged. However, it can still be helpful. Even
+if you don't leave any review comments, you can get a sense of pull request
+conventions and etiquette and get used to the workflow.
+
+1. Go to
+ [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls).
+ You see a list of every open pull request against the Kubernetes website and
+ docs.
+
+2. By default, the only filter that is applied is `open`, so you don't see
+ pull requests that have already been closed or merged. It's a good idea to
+ apply the `cncf-cla: yes` filter, and for your first review, it's a good
+ idea to add `size/S` or `size/XS`. The `size` label is applied automatically
+ based on how many lines of code the PR modifies. You can apply filters using
+ the selection boxes at the top of the page, or use
+ [this shortcut](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3A%22cncf-cla%3A+yes%22+label%3Asize%2FS) for only small PRs. All filters are `AND`ed together, so
+ you can't search for both `size/XS` and `size/S` in the same query.
+
+3. Go to the **Files changed** tab. Look through the changes introduced in the
+ PR, and if applicable, also look at any linked issues. If you see a problem
+ or room for improvement, hover over the line and click the `+` symbol that
+ appears.
+
+ You can type a comment, and either choose **Add single comment** or **Start
+ a review**. Typically, starting a review is better because it allows you to
+ leave multiple comments and notifies the PR owner only when you have
+ completed the review, rather than a separate notification for each comment.
+
+4. When finished, click **Review changes** at the top of the page. You can
+ summarize your review, and you can choose to comment, approve, or request
+ changes. New contributors should always choose **Comment**.
+
+Thanks for reviewing a pull request! When you are new to the project, it's a
+good idea to ask for feedback on your pull request reviews. The `#sig-docs`
+Slack channel is a great place to do this.
+
+## Write a blog post
+
+Anyone can write a blog post and submit it for review. Blog posts should not be
+commercial in nature and should consist of content that will apply broadly to
+the Kubernetes community.
+
+To submit a blog post, you can either submit it using the
+[Kubernetes blog submission form](https://docs.google.com/forms/d/e/1FAIpQLSch_phFYMTYlrTDuYziURP6nLMijoXx_f7sLABEU5gWBtxJHQ/viewform),
+or follow the steps below.
+
+1. [Sign the CLA](#sign-the-cla) if you have not yet done so.
+2. Have a look at the Markdown format for existing blog posts in the
+ [website repository](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts).
+3. Write out your blog post in a text editor of your choice.
+4. On the same link from step 2, click the **Create new file** button. Paste
+ your content into the editor. Name the file to match the proposed title of
+ the blog post, but don't put the date in the file name. The blog reviewers
+ will work with you on the final file name and the date the blog will be
+ published.
+5. When you save the file, Github will walk you through the pull request
+ process.
+6. A blog post reviewer will review your submission and work with you on
+ feedback and final details. When the blog post is approved, the blog will be
+ scheduled for publication.
+
+## Submit a case study
+
+Case studies highlight how organizations are using Kubernetes to solve
+real-world problems. They are written in collaboration with the Kubernetes
+marketing team, which is handled by the CNCF.
+
+Have a look at the source for the
+[existing case studies](https://github.com/kubernetes/website/tree/master/content/en/case-studies).
+Use the [Kubernetes case study submission form](https://www.cncf.io/people/end-user-community/)
+to submit your proposal.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+When you are comfortable with all of the tasks discussed in this topic and you
+want to engage with the Kubernetes docs team in deeper ways, read the
+[intermediate docs contribution guide](/docs/contribute/intermediate/).
+
+{{% /capture %}}
diff --git a/content/en/docs/contribute/style/_index.md b/content/en/docs/contribute/style/_index.md
new file mode 100644
index 000000000..f7ad145f7
--- /dev/null
+++ b/content/en/docs/contribute/style/_index.md
@@ -0,0 +1,9 @@
+---
+title: Documentation style overview
+main_menu: true
+weight: 80
+---
+
+The topics in this section provide guidance on writing style, content formatting
+and organization, and using Hugo customizations specific to Kubernetes
+documentation.
diff --git a/content/en/docs/home/contribute/content-organization.md b/content/en/docs/contribute/style/content-organization.md
similarity index 91%
rename from content/en/docs/home/contribute/content-organization.md
rename to content/en/docs/contribute/style/content-organization.md
index 15098a050..526f755f1 100644
--- a/content/en/docs/home/contribute/content-organization.md
+++ b/content/en/docs/contribute/style/content-organization.md
@@ -1,8 +1,7 @@
---
-title: Content Organization
-date: 2018-04-30
+title: Content organization
content_template: templates/concept
-weight: 42
+weight: 40
---
{{< toc >}}
@@ -95,7 +94,7 @@ The site links in the top-right menu -- and also in the footer -- are built by p
In addition to standalone content pages (Markdown files), Hugo supports [Page Bundles](https://gohugo.io/content-management/page-bundles/).
-One example is [Custom Hugo Shortcodes](/docs/home/contribute/includes/). It is a socalled `leaf bundle`. Everything below the directory with the `index.md` will be part of the bundle, with page-relative links, images can be processed etc.:
+One example is [Custom Hugo Shortcodes](/docs/contribute/style/hugo-shortcodes/). It is a so-called `leaf bundle`. Everything below the directory containing the `index.md` is part of the bundle, with page-relative links, processed images, and so on:
```bash
en/docs/home/contribute/includes
@@ -135,8 +134,8 @@ The `SASS` source of the stylesheets for this site is stored below `src/sass` an
{{% capture whatsnext %}}
-* [Custom Hugo Shortcodes](/docs/home/contribute/includes)
-* [Style Guide](/docs/home/contribute/style-guide)
+* [Custom Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/)
+* [Style guide](/docs/contribute/style/style-guide)
{{% /capture %}}
diff --git a/content/en/docs/home/contribute/includes/example1.md b/content/en/docs/contribute/style/hugo-shortcodes/example1.md
similarity index 100%
rename from content/en/docs/home/contribute/includes/example1.md
rename to content/en/docs/contribute/style/hugo-shortcodes/example1.md
diff --git a/content/en/docs/home/contribute/includes/example2.md b/content/en/docs/contribute/style/hugo-shortcodes/example2.md
similarity index 100%
rename from content/en/docs/home/contribute/includes/example2.md
rename to content/en/docs/contribute/style/hugo-shortcodes/example2.md
diff --git a/content/en/docs/home/contribute/includes/index.md b/content/en/docs/contribute/style/hugo-shortcodes/index.md
similarity index 100%
rename from content/en/docs/home/contribute/includes/index.md
rename to content/en/docs/contribute/style/hugo-shortcodes/index.md
diff --git a/content/en/docs/home/contribute/includes/podtemplate.json b/content/en/docs/contribute/style/hugo-shortcodes/podtemplate.json
similarity index 100%
rename from content/en/docs/home/contribute/includes/podtemplate.json
rename to content/en/docs/contribute/style/hugo-shortcodes/podtemplate.json
diff --git a/content/en/docs/contribute/style/page-templates.md b/content/en/docs/contribute/style/page-templates.md
new file mode 100644
index 000000000..97d639836
--- /dev/null
+++ b/content/en/docs/contribute/style/page-templates.md
@@ -0,0 +1,245 @@
+---
+title: Using Page Templates
+content_template: templates/concept
+weight: 30
+---
+
+{{% capture overview %}}
+
+When contributing new topics, apply one of the following templates to them.
+This standardizes the user experience of a given page.
+
+The page templates are in the
+[`layouts/partials/templates`](https://git.k8s.io/website/layouts/partials/templates)
+directory of the [`kubernetes/website`](https://github.com/kubernetes/website)
+repository.
+
+{{< note >}}
+**Note**: Every new topic needs to use a template. If you are unsure which
+template to use for a new topic, start with the
+[concept template](#concept-template).
+{{< /note >}}
+
+
+{{% /capture %}}
+
+{{< toc >}}
+
+{{% capture body %}}
+
+## Concept template
+
+A concept page explains some aspect of Kubernetes. For example, a concept
+page might describe the Kubernetes Deployment object and explain the role it
+plays as an application is deployed, scaled, and updated. Typically, concept
+pages don't include sequences of steps, but instead provide links to tasks or
+tutorials.
+
+
+To write a new concept page, create a Markdown file in a subdirectory of the
+`/content/en/docs/concepts` directory, with the following characteristics:
+
+- In the page's YAML front-matter, set `content_template: templates/concept`.
+- In the page's body, set the required `capture` variables and any optional
+ ones you want to include:
+
+ | Variable | Required? |
+ |---------------|-----------|
+ | overview | yes |
+ | body | yes |
+ | whatsnext | no |
+
+ The page's body will look like this (remove any optional captures you don't
+ need):
+
+ ```
+ {% raw %}
+
+ {{%/* capture overview */%}}
+
+ {{%/* /capture */%}}
+
+    {{</* toc */>}}
+
+ {{%/* capture body */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture whatsnext */%}}
+
+ {{%/* /capture */%}}
+
+ {% endraw %}
+ ```
+
+- Within each section, write your content. Use the following guidelines:
+ - Use a minimum of H2 headings (with two leading `#` characters). The sections
+ themselves are titled automatically by the template.
+ - For `overview`, use a paragraph to set context for the entire topic.
+ - Add the `{{< toc >}}` shortcode to show an in-page table of contents.
+ - For `body`, explain the concept using free-form Markdown.
+ - For `whatsnext`, give a bullet list of up to 5 topics the reader might be
+ interested in reading next.
+
+An example of a published topic that uses the concept template is
+[Annotations](/docs/concepts/overview/working-with-objects/annotations/). The
+page you are currently reading also uses the concept template.
+
+## Task template
+
+A task page shows how to do a single thing, typically by giving a short
+sequence of steps. Task pages have minimal explanation, but often provide links
+to conceptual topics that provide related background and knowledge.
+
+To write a new task page, create a Markdown file in a subdirectory of the
+`/content/en/docs/tasks` directory, with the following characteristics:
+
+- In the page's YAML front-matter, set `content_template: templates/task`.
+- In the page's body, set the required `capture` variables and any optional
+ ones you want to include:
+
+ | Variable | Required? |
+ |---------------|-----------|
+ | overview | yes |
+ | prerequisites | yes |
+ | steps | no |
+ | discussion | no |
+ | whatsnext | no |
+
+ The page's body will look like this (remove any optional captures you don't
+ need):
+
+ ```
+ {% raw %}
+
+ {{%/* capture overview */%}}
+
+ {{%/* /capture */%}}
+
+    {{</* toc */>}}
+
+ {{%/* capture prerequisites */%}}
+
+    {{</* include "task-tutorial-prereqs.md" */>}} {{</* version-check */>}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture steps */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture discussion */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture whatsnext */%}}
+
+ {{%/* /capture */%}}
+
+ {% endraw %}
+ ```
+
+- Within each section, write your content. Use the following guidelines:
+ - Use a minimum of H2 headings (with two leading `#` characters). The sections
+ themselves are titled automatically by the template.
+ - For `overview`, use a paragraph to set context for the entire topic.
+ - Add the `{{< toc >}}` shortcode to show an in-page table of contents.
+ - For `prerequisites`, use bullet lists when possible. Add additional
+ prerequisites below the ones included by the `include` in the example
+ above. The default prerequisites include a running Kubernetes cluster.
+ - For `steps`, use numbered lists.
+  - For `discussion`, use normal content to expand upon the information covered
+ in `steps`.
+ - For `whatsnext`, give a bullet list of up to 5 topics the reader might be
+ interested in reading next.
+
+An example of a published topic that uses the task template is [Using an HTTP proxy to access the Kubernetes API](/docs/tasks/access-kubernetes-api/http-proxy-access-api).
+
+## Tutorial template
+
+A tutorial page shows how to accomplish a goal that is larger than a single
+task. Typically a tutorial page has several sections, each of which has a
+sequence of steps. For example, a tutorial might provide a walkthrough of a
+code sample that illustrates a certain feature of Kubernetes. Tutorials can
+include surface-level explanations, but should link to related concept topics
+for deep explanations.
+
+To write a new tutorial page, create a Markdown file in a subdirectory of the
+`/content/en/docs/tutorials` directory, with the following characteristics:
+
+- In the page's YAML front-matter, set `content_template: templates/tutorial`.
+- In the page's body, set the required `capture` variables and any optional
+ ones you want to include:
+
+ | Variable | Required? |
+ |---------------|-----------|
+ | overview | yes |
+ | prerequisites | yes |
+ | objectives | yes |
+ | lessoncontent | yes |
+ | cleanup | no |
+ | whatsnext | no |
+
+ The page's body will look like this (remove any optional captures you don't
+ need):
+
+ ```
+ {% raw %}
+
+ {{%/* capture overview */%}}
+
+ {{%/* /capture */%}}
+
+    {{</* toc */>}}
+
+ {{%/* capture prerequisites */%}}
+
+    {{</* include "task-tutorial-prereqs.md" */>}} {{</* version-check */>}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture objectives */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture lessoncontent */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture cleanup */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture whatsnext */%}}
+
+ {{%/* /capture */%}}
+ {% endraw %}
+ ```
+
+- Within each section, write your content. Use the following guidelines:
+ - Use a minimum of H2 headings (with two leading `#` characters). The sections
+ themselves are titled automatically by the template.
+ - For `overview`, use a paragraph to set context for the entire topic.
+ - Add the `{{< toc >}}` shortcode to show an in-page table of contents.
+ - For `prerequisites`, use bullet lists when possible. Add additional
+ prerequisites below the ones included by default.
+ - For `objectives`, use bullet lists.
+ - For `lessoncontent`, use a mix of numbered lists and narrative content as
+ appropriate.
+ - For `cleanup`, use numbered lists to describe the steps to clean up the
+ state of the cluster after finishing the task.
+ - For `whatsnext`, give a bullet list of up to 5 topics the reader might be
+ interested in reading next.
+
+An example of a published topic that uses the tutorial template is
+[Running a Stateless Application Using a Deployment](/docs/tutorials/stateless-application/run-stateless-application-deployment/).
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+- Learn about the [style guide](/docs/contribute/style/style-guide/)
+- Learn about [content organization](/docs/contribute/style/content-organization/)
+
+{{% /capture %}}
+
diff --git a/content/en/docs/home/contribute/style-guide.md b/content/en/docs/contribute/style/style-guide.md
similarity index 96%
rename from content/en/docs/home/contribute/style-guide.md
rename to content/en/docs/contribute/style/style-guide.md
index df7933ab7..402ee0fa9 100644
--- a/content/en/docs/home/contribute/style-guide.md
+++ b/content/en/docs/contribute/style/style-guide.md
@@ -1,10 +1,10 @@
---
title: Documentation Style Guide
+linktitle: Style guide
content_template: templates/concept
+weight: 10
---
-
-
{{% capture overview %}}
This page gives writing style guidelines for the Kubernetes documentation.
These are guidelines, not rules. Use your best judgment, and feel free to
@@ -12,14 +12,15 @@ propose changes to this document in a pull request.
For additional information on creating new content for the Kubernetes
docs, follow the instructions on
-[using page templates](/docs/home/contribute/page-templates/) and
-[creating a documentation pull request](/docs/home/contribute/create-pull-request/).
+[using page templates](/docs/contribute/style/page-templates/) and
+[creating a documentation pull request](/docs/contribute/start/#improve-existing-content).
{{% /capture %}}
{{% capture body %}}
{{< note >}}
-**Note:** Kubernetes documentation uses [Blackfriday Markdown Renderer](https://github.com/russross/blackfriday) along with a few [Hugo Shortcodes](/docs/home/contribute/includes/) to support glossary entries, tabs, and representing feature state.
+**Note:** Kubernetes documentation uses [Blackfriday Markdown Renderer](https://github.com/russross/blackfriday) along with a few [Hugo Shortcodes](/docs/home/contribute/includes/) to support glossary entries, tabs,
+and representing feature state.
{{< /note >}}
## Language
@@ -83,7 +84,7 @@ represents.
Do
Don't
Open the envars.yaml file.
Open the envars.yaml file.
Go to the /docs/tutorials directory.
Go to the /docs/tutorials directory.
-
Open the /_data/concepts.yaml file.
Open the /_data/concepts.yaml file.
+
Open the /_data/concepts.yaml file.
Open the /_data/concepts.yaml file.
### Use the international standard for punctuation inside quotes
diff --git a/content/en/docs/home/contribute/write-new-topic.md b/content/en/docs/contribute/style/write-new-topic.md
similarity index 69%
rename from content/en/docs/home/contribute/write-new-topic.md
rename to content/en/docs/contribute/style/write-new-topic.md
index 9143f87e6..34b3b42b5 100644
--- a/content/en/docs/home/contribute/write-new-topic.md
+++ b/content/en/docs/contribute/style/write-new-topic.md
@@ -1,6 +1,7 @@
---
-title: Writing a New Topic
+title: Writing a new topic
content_template: templates/task
+weight: 20
---
{{% capture overview %}}
@@ -9,7 +10,7 @@ This page shows how to create a new topic for the Kubernetes docs.
{{% capture prerequisites %}}
Create a fork of the Kubernetes documentation repository as described in
-[Creating a Documentation Pull Request](/docs/home/contribute/create-pull-request/).
+[Start contributing](/docs/contribute/start/).
{{% /capture %}}
{{% capture steps %}}
@@ -21,6 +22,11 @@ is the best fit for your content:
+
+
Concept
+
A concept page explains some aspect of Kubernetes. For example, a concept page might describe the Kubernetes Deployment object and explain the role it plays as an application is deployed, scaled, and updated. Typically, concept pages don't include sequences of steps, but instead provide links to tasks or tutorials. For an example of a concept topic, see Nodes.
+
+
Task
A task page shows how to do a single thing. The idea is to give readers a sequence of steps that they can actually do as they read the page. A task page can be short or long, provided it stays focused on one area. In a task page, it is OK to blend brief explanations with the steps to be performed, but if you need to provide a lengthy explanation, you should do that in a concept topic. Related task and concept topics should link to each other. For an example of a short task page, see Configure a Pod to Use a Volume for Storage. For an example of a longer task page, see Configure Liveness and Readiness Probes
@@ -31,17 +37,12 @@ is the best fit for your content:
A tutorial page shows how to accomplish a goal that ties together several Kubernetes features. A tutorial might provide several sequences of steps that readers can actually do as they read the page. Or it might provide explanations of related pieces of code. For example, a tutorial could provide a walkthrough of a code sample. A tutorial can include brief explanations of the Kubernetes features that are being tied together, but should link to related concept topics for deep explanations of individual features.
-
-
Concept
-
A concept page explains some aspect of Kubernetes. For example, a concept page might describe the Kubernetes Deployment object and explain the role it plays as an application is deployed, scaled, and updated. Typically, concept pages don't include sequences of steps, but instead provide links to tasks or tutorials. For an example of a concept topic, see Nodes.
-
-
-Each page type has a
-[template](/docs/home/contribute/page-templates/)
-that you can use as you write your topic.
-Using templates helps ensure consistency among topics of a given type.
+Use a template for each new page. Each page type has a
+[template](/docs/contribute/style/page-templates/)
+that you can use as you write your topic. Using templates helps ensure
+consistency among topics of a given type.
## Choosing a title and filename
@@ -70,24 +71,29 @@ triple-dashed lines at the top of the page. Here's an example:
Depending on your page type, put your new file in a subdirectory of one of these:
-* /docs/tasks/
-* /docs/tutorials/
-* /docs/concepts/
+* /content/en/docs/tasks/
+* /content/en/docs/tutorials/
+* /content/en/docs/concepts/
You can put your file in an existing subdirectory, or you can create a new
subdirectory.
-## Creating an entry in the table of contents
+## Placing your topic in the table of contents
-Depending page type, create an entry in one of these files:
+The table of contents is built dynamically using the directory structure of the
+documentation source. The top-level directories under `/content/en/docs/` create
+top-level navigation, and subdirectories each have entries in the table of
+contents.
-* /_data/tasks.yaml
-* /_data/tutorials.yaml
-* /_data/concepts.yaml
+Each subdirectory has a file `_index.md`, which represents the "home" page for
+a given subdirectory's content. The `_index.md` does not need a template. It
+can contain overview content about the topics in the subdirectory.
-Here's an example of an entry in /_data/tasks.yaml:
-
- - docs/tasks/configure-pod-container/configure-volume-storage.md
+Other files in a directory are sorted alphabetically by default. This is almost
+never the best order. To control the relative sorting of topics in a
+subdirectory, set the `weight:` front-matter key to an integer. Typically, we
+use multiples of 10, to account for adding topics later. For instance, a topic
+with weight `10` will come before one with weight `20`.
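+
+For example, front-matter like the following (a sketch mirroring this topic's
+own front-matter) places the page after any sibling topic with weight `10`:
+
+```yaml
+---
+title: Writing a new topic
+content_template: templates/task
+weight: 20
+---
+```
+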
## Embedding code in your topic
@@ -95,43 +101,51 @@ If you want to include some code in your topic, you can embed the code in your
file directly using the markdown code block syntax. This is recommended for the
following cases (not an exhaustive list):
-- The code is showing the output from a command such as
+- The code shows the output from a command such as
`kubectl get deploy mydeployment -o json | jq '.status'`.
-- The code is not generic enough for users to try out. As an example, the YAML
+- The code is not generic enough for users to try out. As an example, you can
+ embed the YAML
file for creating a Pod which depends on a specific
- [FlexVolume](/docs/concepts/storage/volumes#flexvolume) implementation can be
- directly embedded into the topic when appropriate.
+ [FlexVolume](/docs/concepts/storage/volumes#flexvolume) implementation.
- The code is an incomplete example because its purpose is to highlight a
- portion of an otherwise large file. For example, when describing ways to
+ portion of a larger file. For example, when describing ways to
customize the [PodSecurityPolicy](/docs/tasks/administer-cluster/sysctl-cluster/#podsecuritypolicy)
- for some reasons, you may provide a short snippet directly in your topic file.
+  for some reason, you can provide a short snippet directly in your topic file.
- The code is not meant for users to try out due to other reasons. For example,
when describing how a new attribute should be added to a resource using the
- `kubectl edit` command, you may provide a short example that includes only
+ `kubectl edit` command, you can provide a short example that includes only
the attribute to add.
## Including code from another file
Another way to include code in your topic is to create a new, complete sample
-file (or a group of sample files) and then reference the sample(s) from your
-topic. This is the preferred way of including sample YAML files when the sample
-is generic, reusable, and you want the readers to try it out by themselves.
+file (or group of sample files) and then reference the sample from your topic.
+Use this method to include sample YAML files when the sample is generic and
+reusable, and you want the reader to try it out themselves.
-When adding a new standalone sample file (e.g. a YAML file), place the code in
+When adding a new standalone sample file, such as a YAML file, place the code in
one of the `<LANG>/examples/` subdirectories where `<LANG>` is the language for
the topic. In your topic file, use the `codenew` shortcode:
{{< codenew file="<RELPATH>/my-example-yaml>" >}}
-where `<RELPATH>` is the path to the file you're including, relative to the
-`examples` directory. For example, the following short code references a YAML
-file located at `content/en/examples/pods/storage/gce-volume.yaml`.
+where `<RELPATH>` is the path to the file to include, relative to the
+`examples` directory. The following Hugo shortcode references a YAML
+file located at `/content/en/examples/pods/storage/gce-volume.yaml`.
-
+```none
+{{</* codenew file="pods/storage/gce-volume.yaml" */>}}
+```
+
+{{< note >}}
+**Note**: To show raw Hugo shortcodes as in the above example and prevent Hugo
+from interpreting them, use C-style comments directly after the `<` and before
+the `>` characters. View the code for this page for an example.
+{{< /note >}}
## Showing how to create an API object from a configuration file
-If you need to show the reader how to create an API object based on a
+If you need to demonstrate how to create an API object based on a
configuration file, place the configuration file in one of the subdirectories
under `/examples`.
@@ -142,7 +156,7 @@ kubectl create -f https://k8s.io/examples/pods/storage/gce-volume.yaml
```
{{< note >}}
-**NOTE**: When adding new YAML files to the `/examples` directory, make
+**Note**: When adding new YAML files to the `/examples` directory, make
sure the file is also included into the `/examples_test.go` file. The
Travis CI for the Website automatically runs this test case when PRs are
submitted to ensure all examples pass the tests.
diff --git a/content/en/docs/doc-contributor-tools/snippets/README.md b/content/en/docs/doc-contributor-tools/snippets/README.md
index fac2332f3..3706fe856 100644
--- a/content/en/docs/doc-contributor-tools/snippets/README.md
+++ b/content/en/docs/doc-contributor-tools/snippets/README.md
@@ -47,5 +47,5 @@ Placeholder text is included.
1. Develop the snippet locally and verify that it works as expected.
2. Copy the template's code into the `atom-snippets.cson` file on Github. Raise a
- pull request, and ask for review from another Atom user in #sig-docs on
+ pull request, and ask for review from another Atom user in `#sig-docs` on
Kubernetes Slack.
\ No newline at end of file
diff --git a/content/en/docs/editdocs.md b/content/en/docs/editdocs.md
deleted file mode 100644
index a7de09377..000000000
--- a/content/en/docs/editdocs.md
+++ /dev/null
@@ -1,65 +0,0 @@
----
-layout: docwithnav
-title: Contributing to the Kubernetes Documentation
----
-
-
-
-
-
-
-
Continue your edit
-
-
To make changes to the document, do the following:
-
-
-
Click the button below to edit the page you were just on.
-
Click Commit Changes at the bottom of the screen to create a copy of our site in your GitHub account called a fork.
-
You can make other changes in your fork after it is created, if you want.
-
On the index page, click New Pull Request to let us know about it.
-
-
-
-
-
-
-
-
-
Edit our site in the cloud
-
-
Click the button below to visit the repo for our site. You can then click the Fork button in the upper-right area of the screen to create a copy of our site in your GitHub account called a fork. Make any changes you want in your fork, and when you are ready to send those changes to us, go to the index page for your fork and click New Pull Request to let us know about it.
-
-
-
-
-For more information about contributing to the Kubernetes documentation, see:
-
-* [Creating a Documentation Pull Request](/docs/home/contribute/create-pull-request/)
-* [Writing a New Topic](/docs/home/contribute/write-new-topic/)
-* [Staging Your Documentation Changes](/docs/home/contribute/stage-documentation-changes/)
-* [Using Page Templates](/docs/home/contribute/page-templates/)
-* [Documentation Style Guide](/docs/home/contribute/style-guide/)
-* How to work with generated documentation
- * [Generating Reference Documentation for Kubernetes Federation API](/docs/home/contribute/generated-reference/federation-api/)
- * [Generating Reference Documentation for kubectl Commands](/docs/home/contribute/generated-reference/kubectl/)
- * [Generating Reference Documentation for the Kubernetes API](/docs/home/contribute/generated-reference/kubernetes-api/)
- * [Generating Reference Pages for Kubernetes Components and Tools](/docs/home/contribute/generated-reference/kubernetes-components/)
diff --git a/content/en/docs/getting-started-guides/ubuntu/operational-considerations.md b/content/en/docs/getting-started-guides/ubuntu/operational-considerations.md
index c93b81cc5..8970df854 100644
--- a/content/en/docs/getting-started-guides/ubuntu/operational-considerations.md
+++ b/content/en/docs/getting-started-guides/ubuntu/operational-considerations.md
@@ -147,8 +147,7 @@ htpasswd -c -b -B htpasswd userA passwordA
Assuming that your registry will be reachable at ```myregistry.company.com```,
you already have your TLS key in the ```registry.key``` file, and your TLS
-certificate (with ```myregistry.company.com``` as Common Name) in the
-```registry.crt``` file, you would then run:
+certificate (with ```myregistry.company.com``` as Common Name) in the ```registry.crt``` file, you would then run:
```
juju run-action kubernetes-worker/0 registry domain=myregistry.company.com htpasswd="$(base64 -w0 htpasswd)" htpasswd-plain="$(base64 -w0 htpasswd-plain)" tlscert="$(base64 -w0 registry.crt)" tlskey="$(base64 -w0 registry.key)" ingress=true
diff --git a/content/en/docs/home/contribute/_index.md b/content/en/docs/home/contribute/_index.md
deleted file mode 100755
index 15c469efc..000000000
--- a/content/en/docs/home/contribute/_index.md
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: "Contributing to the Kubernetes Docs"
-weight: 40
----
-
diff --git a/content/en/docs/home/contribute/blog-post.md b/content/en/docs/home/contribute/blog-post.md
deleted file mode 100644
index 9ec95b53a..000000000
--- a/content/en/docs/home/contribute/blog-post.md
+++ /dev/null
@@ -1,134 +0,0 @@
----
-title: Writing a Blog Post
-reviewers:
-- zacharysarah
-- kbarnard10
-- sarahkconway
-content_template: templates/task
----
-
-{{% capture overview %}}
-This page shows you how to submit a post for the [Kubernetes Blog](https://kubernetes.io/blog).
-
-You’ll receive a response within 5 business days on whether your submission is approved and information about next steps, if any.
-{{% /capture %}}
-
-{{% capture prerequisites %}}
-To create a new blog post, you can either:
-
-- Fill out the [Kubernetes Blog Submission](https://docs.google.com/forms/d/e/1FAIpQLSch_phFYMTYlrTDuYziURP6nLMijoXx_f7sLABEU5gWBtxJHQ/viewform) form.
-
-or:
-
-- Open a pull request against this repository as described in
-[Creating a Documentation Pull Request](/docs/home/contribute/create-pull-request/)
-{{% /capture %}}
-
-{{% capture steps %}}
-## Kubernetes Blog guidelines
-
-All content must be original. The Kubernetes Blog does not post material previously published elsewhere.
-
-Suitable Content (with examples):
-
-- User case studies (Yahoo Japan, Bitmovin)
-- New Kubernetes capabilities (5-days-of-k8s)
-- Kubernetes projects updates (kompose)
-- Updates from Special Interest Groups (SIG-OpenStack)
-- Tutorials and walkthroughs (PostgreSQL w/ StatefulSets)
-- Thought leadership around Kubernetes (CaaS, the foundation for next generation PaaS)
-- Kubernetes Partner OSS integration (Fission)
-
-Unsuitable Content:
-
-- Vendor product pitches
-- Partner updates without an integration and customer story
-- Syndicated posts (language translations are permitted)
-
-## Create a blog post with a form
-
-Open the [Kubernetes Blog Submission](https://docs.google.com/forms/d/e/1FAIpQLSch_phFYMTYlrTDuYziURP6nLMijoXx_f7sLABEU5gWBtxJHQ/viewform) form, fill it out, and click Submit.
-
-## Create a post by opening a pull request
-
-### Add a new Markdown file
-
-Add a new Markdown (`*.md`) to `/blog/_posts/`.
-
-Name the file using the following format:
-```
-YYYY-MM-DD-Title.md
-```
-For example:
-```
-2015-03-20-Welcome-to-the-Kubernetes-Blog.md
-```
-
-### Add front matter to the file
-
-Add the following block to the top of the new file:
-```
----
-layout: blog
-title:
-date:
----
-```
-
-For example:
-```
----
-layout: blog
-title: Welcome to the Kubernetes Blog!
-date: Saturday, March 20, 2015
----
-```
-
-### Create a new pull request (PR)
-
-When you [create a new pull request](/docs/home/contribute/create-pull-request/), include the following in the PR description:
-
-{{< note >}}
-- Desired publishing date
-**Note:** PRs must include complete drafts no later than 15 days prior to the desired publication date.
-{{< /note >}}
-- Author information:
- - Name
- - Title
- - Company
- - Contact email
-
-### Add content to the file
-
-Write your post using the following guidelines.
-
-### Add images
-
-Add any image files the post contains to `/static/images/blog/`.
-
-The preferred image format is SVG.
-
-Add the proposed date of your blog post to the title of any image files the post contains:
-```
-YYYY-MM-DD-image.svg
-```
-For example:
-```
-2018-03-01-cncf-color.svg
-```
-
-Please use [reference-style image links][ref-style] to keep posts readable.
-
-Here's an example of how to include an image in a blog post:
-
-```
-Check out the ![CNCF logo][cncf-logo].
-
-[cncf-logo]: /images/blog/2018-03-01-cncf-color.svg
-```
-
-{{% /capture %}}
-
-
-
-[ref-style]: https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet#images
diff --git a/content/en/docs/home/contribute/create-pull-request.md b/content/en/docs/home/contribute/create-pull-request.md
deleted file mode 100644
index e1a6918a5..000000000
--- a/content/en/docs/home/contribute/create-pull-request.md
+++ /dev/null
@@ -1,122 +0,0 @@
----
-title: Creating a Documentation Pull Request
-content_template: templates/task
----
-
-{{% capture overview %}}
-
-To contribute to the Kubernetes documentation, create a pull request against the
-kubernetes/website
-repository. This page shows how to create a pull request.
-
-{{% /capture %}}
-
-{{% capture prerequisites %}}
-
-1. Create a Github account.
-
-1. Sign the
- Linux Foundation Contributor License Agreement (CLA).
-
-Documentation will be published under the [CC BY SA 4.0](https://git.k8s.io/website/LICENSE) license.
-
-{{% /capture %}}
-
-{{% capture steps %}}
-
-## Creating a fork of the Kubernetes documentation repository
-
-1. Go to the
- kubernetes/website
- repository.
-
-1. In the upper-right corner, click **Fork**. This creates a copy of the
-Kubernetes documentation repository in your GitHub account. The copy
-is called a *fork*.
-
-## Making your changes
-
-1. In your GitHub account, in your fork of the Kubernetes docs, create
-a new branch to use for your contribution.
-
-1. In your new branch, make your changes and commit them. If you want to
-[write a new topic](/docs/home/contribute/write-new-topic/),
-choose the
-[page type](/docs/home/contribute/page-templates/)
-that is the best fit for your content.
-
-## Viewing your changes locally
-
-You can use Hugo to see a preview of your changes locally.
-
-1. [Install Hugo](https://gohugo.io/getting-started/installing/)
-version 0.40.3 or later.
-
-1. Go to the root directory of your clone of the Kubernetes docs, and
-enter this command:
-
- hugo server
-
-1. In your browser's address bar, enter `localhost:1313`.
-
-## Viewing your changes in the Netlify preview
-
-When you submit a pull request, you can see a preview of your changes at
-[Netlify](https://www.netlify.com/). In your pull request, at the bottom,
-to the right of **deploy/netlify**, click **Details**. Also, there is often
-a link to the Netlify preview in the pull request comments.
-
-## Submitting a pull request to the master branch (Current Release)
-
-If you want your change to be published in the released version Kubernetes docs,
-create a pull request against the master branch of the Kubernetes
-documentation repository.
-
-1. In your GitHub account, in your new branch, create a pull request
-against the master branch of the kubernetes/website
-repository. This opens a page that shows the status of your pull request.
-
-1. Click **Show all checks**. Wait for the **deploy/netlify** check to complete.
-To the right of **deploy/netlify**, click **Details**. This opens a staging
-site where you can verify that your changes have rendered correctly.
-
-1. During the next few days, check your pull request for reviewer comments.
-If needed, revise your pull request by committing changes to your
-new branch in your fork.
-
-## Submitting a pull request to the <vnext> branch (Upcoming Release)
-
-If your documentation change should not be released until the next release of
-the Kubernetes product, create a pull request against the <vnext> branch
-of the Kubernetes documentation repository. The <vnext> branch has the
-form `release-`, for example release-1.5.
-
-1. In your GitHub account, in your new branch, create a pull request
-against the <vnext> branch of the kubernetes/website
-repository. This opens a page that shows the status of your pull request.
-
-1. Click **Show all checks**. Wait for the **deploy/netlify** check to complete.
-To the right of **deploy/netlify**, click **Details**. This opens a staging
-site where you can verify that your changes have rendered correctly.
-
-1. During the next few days, check your pull request for reviewer comments.
-If needed, revise your pull request by committing changes to your
-new branch in your fork.
-
-The staging site for the upcoming Kubernetes release is here:
-[http://kubernetes-io-vnext-staging.netlify.com/](http://kubernetes-io-vnext-staging.netlify.com/).
-The staging site reflects the current state of what's been merged in the
-release branch, or in other words, what the docs will look like for the
-next upcoming release. It's automatically updated as new PRs get merged.
-
-## Pull request review process for both Current and Upcoming Releases
-Once your pull request is created, a Kubernetes reviewer will take responsibility for providing clear, actionable feedback. As the owner of the pull request, **it is your responsibility to modify your pull request to address the feedback that has been provided to you by the Kubernetes reviewer.** Also note that you may end up having more than one Kubernetes reviewer provide you feedback or you may end up getting feedback from a Kubernetes reviewer that is different than the one originally assigned to provide you feedback. Furthermore, in some cases, one of your reviewers might ask for a technical review from a [Kubernetes tech reviewer](https://github.com/kubernetes/website/wiki/Tech-reviewers) when needed. Reviewers will do their best to provide feedback in a timely fashion but response time can vary based on circumstances.
-
-{{% /capture %}}
-
-{{% capture whatsnext %}}
-* Learn about [writing a new topic](/docs/home/contribute/write-new-topic/).
-* Learn about [using page templates](/docs/home/contribute/page-templates/).
-{{% /capture %}}
-
-
diff --git a/content/en/docs/home/contribute/page-templates.md b/content/en/docs/home/contribute/page-templates.md
deleted file mode 100644
index 5ca04c2c8..000000000
--- a/content/en/docs/home/contribute/page-templates.md
+++ /dev/null
@@ -1,231 +0,0 @@
----
-title: Using Page Templates
----
-
-
-
-
These page templates are available for writers who would like to contribute new topics to the Kubernetes docs:
A task page shows how to do a single thing, typically by giving a short
-sequence of steps. Task pages have minimal explanation, but often provide links
-to conceptual topics that provide related background and knowledge.
-
-
To write a new task page, create a Markdown file in a subdirectory of the
-/docs/tasks directory. In your Markdown file, provide values for these
-variables:
-
-
-
overview - required
-
prerequisites - required
-
steps - required
-
discussion - optional
-
whatsnext - optional
-
-
-
Then include templates/task.md like this:
-
-{% raw %}
...
-{% include templates/task.md %}
{% endraw %}
-
-
In the steps section, use ## to start with a level-two heading. For subheadings,
-use ### and #### as needed. Similarly, if you choose to have a discussion section,
-start the section with a level-two heading.
-
-
Here's an example of a Markdown file that uses the task template:
-
-{% raw %}
-
---
-title: Configuring This Thing
----
-
-{% capture overview %}
-This page shows how to ...
-{% endcapture %}
-
-{% capture prerequisites %}
-* Do this.
-* Do this too.
-{% endcapture %}
-
-{% capture steps %}
-## Doing ...
-
-1. Do this.
-1. Do this next. Possibly read this [related explanation](...).
-{% endcapture %}
-
-{% capture discussion %}
-## Understanding ...
-
-Here's an interesting thing to know about the steps you just did.
-{% endcapture %}
-
-{% capture whatsnext %}
-* Learn more about [this](...).
-* See this [related task](...).
-{% endcapture %}
-
-{% include templates/task.md %}
-{% endraw %}
-
-
Here's an example of a published topic that uses the task template:
A tutorial page shows how to accomplish a goal that is larger than a single
-task. Typically a tutorial page has several sections, each of which has a
-sequence of steps. For example, a tutorial might provide a walkthrough of a
-code sample that illustrates a certain feature of Kubernetes. Tutorials can
-include surface-level explanations, but should link to related concept topics
-for deep explanations.
-
-
To write a new tutorial page, create a Markdown file in a subdirectory of the
-/docs/tutorials directory. In your Markdown file, provide values for these
-variables:
-
-
-
overview - required
-
prerequisites - required
-
objectives - required
-
lessoncontent - required
-
cleanup - optional
-
whatsnext - optional
-
-
-
Then include templates/tutorial.md like this:
-
-{% raw %}
...
-{% include templates/tutorial.md %}
{% endraw %}
-
-
In the lessoncontent section, use ## to start with a level-two heading. For subheadings,
-use ### and #### as needed.
-
-
Here's an example of a Markdown file that uses the tutorial template:
-
-{% raw %}
-
---
-title: Running a Thing
----
-
-{% capture overview %}
-This page shows how to ...
-{% endcapture %}
-
-{% capture prerequisites %}
-* Do this.
-* Do this too.
-{% endcapture %}
-
-{% capture objectives %}
-* Learn this.
-* Build this.
-* Run this.
-{% endcapture %}
-
-{% capture lessoncontent %}
-## Building ...
-
-1. Do this.
-1. Do this next. Possibly read this [related explanation](...).
-
-## Running ...
-
-1. Do this.
-1. Do this next.
-
-## Understanding the code
-Here's something interesting about the code you ran in the preceding steps.
-{% endcapture %}
-
-{% capture cleanup %}
-* Delete this.
-* Stop this.
-{% endcapture %}
-
-{% capture whatsnext %}
-* Learn more about [this](...).
-* See this [related tutorial](...).
-{% endcapture %}
-
-{% include templates/tutorial.md %}
-{% endraw %}
-
-
Here's an example of a published topic that uses the tutorial template:
A concept page explains some aspect of Kubernetes. For example, a concept
-page might describe the Kubernetes Deployment object and explain the role it
-plays as an application is deployed, scaled, and updated. Typically, concept
-pages don't include sequences of steps, but instead provide links to tasks or
-tutorials.
-
-
To write a new concept page, create a Markdown file in a subdirectory of the
-/docs/concepts directory. In your Markdown file, provide values for these
-variables:
-
-
-
overview - required
-
body - required
-
whatsnext - optional
-
-
-
Then include templates/concept.md like this:
-
-{% raw %}
...
-{% include templates/concept.md %}
{% endraw %}
-
-
In the body section, use ## to start with a level-two heading. For subheadings,
-use ### and #### as needed.
-
-
Here's an example of a page that uses the concept template:
-
-{% raw %}
-
---
-title: Understanding this Thing
----
-
-{% capture overview %}
-This page explains ...
-{% endcapture %}
-
-{% capture body %}
-## Understanding ...
-
-Kubernetes provides ...
-
-## Using ...
-
-To use ...
-{% endcapture %}
-
-{% capture whatsnext %}
-* Learn more about [this](...).
-* See this [related task](...).
-{% endcapture %}
-
-{% include templates/concept.md %}
-{% endraw %}
-
-
Here's an example of a published topic that uses the concept template:
-
-
-
diff --git a/content/en/docs/home/contribute/participating.md b/content/en/docs/home/contribute/participating.md
deleted file mode 100644
index 565d1c8db..000000000
--- a/content/en/docs/home/contribute/participating.md
+++ /dev/null
@@ -1,114 +0,0 @@
----
-title: Participating in SIG-DOCS
-content_template: templates/concept
----
-
-{{% capture overview %}}
-
-SIG-DOCS is one of the [special interest groups](https://github.com/kubernetes/community/blob/master/sig-list.md) within the Kubernetes project, focused on writing, updating, and maintaining the documentation for Kubernetes as a whole.
-
-{{% /capture %}}
-
-{{% capture body %}}
-
-SIG Docs welcomes content and reviews from all contributors. Anyone can open a pull request (PR), and anyone is welcome to comment on content or pull requests in progress.
-
-Within the Kubernetes project, you may also become a member, reviewer, or approver.
-These roles confer additional privileges and responsibilities when it comes to approving and committing changes.
-See [community-membership](https://github.com/kubernetes/community/blob/master/community-membership.md) for more information on how membership works within the Kubernetes community.
-
-## Roles and Responsibilities
-
-The automation reads `/hold`, `/lgtm`, and `/approve` comments and sets labels on the pull request.
-When a pull request has the `lgtm` and `approve` labels without any `hold` labels, the pull request merges automatically.
-Kubernetes org members, and reviewers and approvers for SIG Docs can add comments to control the merge automation.
-
-- Members
-
- Any member of the [Kubernetes organization](https://github.com/kubernetes) can review a pull request, and SIG Docs team members frequently request reviews from members of other SIGs for technical accuracy.
- SIG Docs also welcomes reviews and feedback regardless of Kubernetes org membership.
- You can indicate your approval by adding a comment of `/lgtm` to a pull request.
-
-- Reviewers
-
- Reviewers are individuals who review documentation pull requests.
-
- Automation assigns reviewers to pull requests, and contributors can request a review with a comment on the pull request: `/assign [@_github_handle]`.
- To indicate that a pull request requires no further changes, a reviewer should add comment to the pull request `/lgtm`.
- A reviewer indicates technical accuracy with a `/lgtm` comment.
-
- Reviewers can add a `/hold` comment to prevent the pull request from being merged.
- Another reviewer or approver can remove a hold with the comment: `/hold cancel`.
-
- When a reviewer is assigned a pull request to review it is not a sole responsibility, and any other reviewer may also offer their opinions on the pull request.
- If a reviewer is requested, it is generally expected that the PR will be left to that reviewer to do their editorial pass on the content.
- If a PR author or SIG Docs maintainer requests a review, refrain from merging or closing the PR until the requested reviewer completes their review.
-
-- Approvers
-
- Approvers have the ability to merge a PR.
-
- Approvers can indicate their approval with a comment to the pull request: `/approve`.
- An approver is indicating editorial approval with the an `/approve` comment.
-
- Approvers can add a `/hold` comment to prevent the pull request from being merged.
- Another reviewer or approver can remove a hold with the comment: `/hold cancel`.
-
- Approvers may skip further reviews for small pull requests if the proposed changes appear trivial and/or well-understood.
- An approver can indicate `/lgtm` or `/approve` in a PR comment to have a pull request merged, and all pull requests require at least one approver to provide their vote in order for the PR to be merged.
-
- {{< note >}}**Note:** There is a special case when an approver uses the comment: `/lgtm`. In these cases, the automation will add both `lgtm` and `approve` tags, skipping any further review.
- {{< /note >}}
-
- For PRs that require no review (typos or otherwise trivial changes), approvers can enter an `lgtm` comment, indicating no need for further review and flagging the PR with approval to merge.
-
-### Teams and groups within SIG Docs
-
-You can get an overview of [SIG Docs from the community github repo](https://github.com/kubernetes/community/tree/master/sig-docs).
-The SIG Docs group defines two teams on Github:
-
- - [@kubernetes/sig-docs-maintainers](https://github.com/orgs/kubernetes/teams/sig-docs-maintainers)
- - [@kubernetes/sig-docs-pr-reviews](https://github.com/orgs/kubernetes/teams/sig-docs-pr-reviews)
-
-These groups maintain the [Kubernetes website repository](https://github.com/kubernetes/website), which houses the content hosted at this site.
-Both can be referenced with their `@name` in github comments to communicate with everyone in that group.
-
-These teams overlap, but do not exactly match, the groups used by the automation tooling.
-For assignment of issues, pull requests, and to support PR approvals, the automation uses information from the OWNERS file.
-
-To volunteer as a reviewer or approver, make a pull request and add your Github handle to the relevant section in the [OWNERS file](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md).
-
-{{< note >}}
-**Note:** Reviewers and approvers must meet requirements for participation.
-For more information, see the [Kubernetes community](https://github.com/kubernetes/community/blob/master/community-membership.md#membership) repository.
-{{< /note >}}
-
-Documentation for the [OWNERS](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md)
-explains how to maintain OWNERS for each repository that enables it.
-
-The [Kubernetes website repository](https://github.com/kubernetes/website) has two automation (prow) [plugins enabled](https://github.com/kubernetes/test-infra/blob/master/prow/plugins.yaml#L210):
-
-- blunderbuss
-- approve
-
-These two plugins use the [OWNERS](https://github.com/kubernetes/website/blob/master/OWNERS) and [OWNERS_ALIASES](https://github.com/kubernetes/website/blob/master/OWNERS_ALIASES) files in our repo for configuration.
-
-{{% /capture %}}
-
-{{% capture whatsnext %}}
-For more information about contributing to the Kubernetes documentation, see:
-
-* Review the SIG Docs [Style Guide](/docs/home/contribute/style-guide/).
-* Learn how to [stage your documentation changes](/docs/home/contribute/stage-documentation-changes/).
-* Learn about [writing a new topic](/docs/home/contribute/write-new-topic/).
-* Learn about [using page templates](/docs/home/contribute/page-templates/).
-* Learn about [staging your changes](/docs/home/contribute/stage-documentation-changes/).
-* Learn about [creating a pull request](/docs/home/contribute/create-pull-request/).
-* How to generate documentation:
- * Learn how to [generate Reference Documentation for Kubernetes Federation API](/docs/home/contribute/generated-reference/federation-api/)
- * Learn how to [generate Reference Documentation for kubectl Commands](/docs/home/contribute/generated-reference/kubectl/)
- * Learn how to [generate Reference Documentation for the Kubernetes API](/docs/home/contribute/generated-reference/kubernetes-api/)
- * Learn how to [generate Reference Pages for Kubernetes Components and Tools](/docs/home/contribute/generated-reference/kubernetes-components/)
-{{% /capture %}}
-
-
diff --git a/content/en/docs/home/contribute/review-issues.md b/content/en/docs/home/contribute/review-issues.md
deleted file mode 100644
index 69d8605cb..000000000
--- a/content/en/docs/home/contribute/review-issues.md
+++ /dev/null
@@ -1,104 +0,0 @@
----
-title: Reviewing Documentation Issues
-content_template: templates/concept
----
-
-{{% capture overview %}}
-
-This page explains how documentation issues are reviewed and prioritized for the
-kubernetes/website repository. The purpose is to provide a way to organize issues and make it easier to contribute to Kubernetes documentation. The following should be used as the standard way of prioritizing, labeling, and interacting with issues.
-{{% /capture %}}
-
-{{% capture body %}}
-
-## Categorizing issues
-Issues should be sorted into different buckets of work using the following labels and definitions. If an issue doesn't have enough information to identify a problem that can be researched, reviewed, or worked on (i.e. the issue doesn't fit into any of the categories below) you should close the issue with a comment explaining why it is being closed.
-
-### Needs Clarification
-* Issues that need more information from the original submitter to make them actionable. Issues with this label that aren't followed up within a week may be closed.
-
-### Actionable
-* Issues that can be worked on with current information (or may need a comment to explain what needs to be done to make it more clear)
-* Allows contributors to have easy to find issues to work on
-
-
-### Needs Tech Review
-* Issues that need more information in order to be worked on (the proposed solution needs to be proven, a subject matter expert needs to be involved, work needs to be done to understand the problem/resolution and if the issue is still relevant)
-* Promotes transparency about level of work needed for the issue and that issue is in progress
-
-### Needs Docs Review
-* Issues that are suggestions for better processes or site improvements that require community agreement to be implemented
-* Topics can be brought to SIG meetings as agenda items
-
-### Needs UX Review
-* Issues that are suggestions for improving the user interface of the site.
-* Fixing broken site elements.
-
-## Prioritizing Issues
-The following labels and definitions should be used to prioritize issues. If you change the priority of an issues, please comment on the issue with your reasoning for the change.
-
-### P1
-* Major content errors affecting more than 1 page
-* Broken code sample on a heavily trafficked page
-* Errors on a “getting started” page
-* Well known or highly publicized customer pain points
-* Automation issues
-
-### P2
-* Default for all new issues
-* Broken code for sample that is not heavily used
-* Minor content issues in a heavily trafficked page
-* Major content issues on a lower-trafficked page
-
-### P3
-* Typos and broken anchor links
-
-## Handling special issue types
-
-### Duplicate issues
-If a single problem has one or more issues open for it, the problem should be consolidated into a single issue. You should decide which issue to keep open (or open a new issue), port over all relevant information, link related issues, and close all the other issues that describe the same problem. Only having a single issue to work on will help reduce confusion and avoid duplicating work on the same problem.
-
-### Dead link issues
-Depending on where the dead link is reported, different actions are required to resolve the issue. Dead links in the API and Kubectl docs are automation issues and should be assigned a P1 until the problem can be fully understood. All other dead links are issues that need to be manually fixed and can be assigned a P3.
-
-### Support requests or code bug reports
-Some issues opened for docs are instead issues with the underlying code, or requests for assistance when something (like a tutorial) didn't work. For issues unrelated to docs, close the issue with a comment directing the requester to support venues (Slack, Stack Overflow) and, if relevant, where to file an issue for bugs with features (kubernetes/kubernetes is a great place to start).
-
-Sample response to a request for support:
-
-```
-This issue sounds more like a request for support and less
-like an issue specifically for docs. I encourage you to bring
-your question to the `#kubernetes-users` channel in
-[Kubernetes slack](http://slack.k8s.io/). You can also search
-resources like
-[Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
-for answers to similar questions.
-
-You can also open issues for Kubernetes functionality in
- https://github.com/kubernetes/kubernetes.
-
-If this is a documentation issue, please re-open this issue.
-```
-
-Sample code bug report response:
-
-```
-This sounds more like an issue with the code than an issue with
-the documentation. Please open an issue at
-https://github.com/kubernetes/kubernetes/issues.
-
-If this is a documentation issue, please re-open this issue.
-```
-
-{{% /capture %}}
-
-
-
-{{% capture whatsnext %}}
-* Learn about [writing a new topic](/docs/home/contribute/write-new-topic/).
-* Learn about [using page templates](/docs/home/contribute/page-templates/).
-* Learn about [staging your changes](/docs/home/contribute/stage-documentation-changes/).
-{{% /capture %}}
-
-
diff --git a/content/en/docs/imported/release/notes.md b/content/en/docs/imported/release/notes.md
index fee5d6aa6..e6bdd4a06 100644
--- a/content/en/docs/imported/release/notes.md
+++ b/content/en/docs/imported/release/notes.md
@@ -83,8 +83,9 @@ Work this cycle focused on graduating existing functions, and on making security
RBAC [cluster role aggregation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles), introduced in 1.9, graduated to stable status with no changes in 1.11, and [client-go credential plugins](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins) graduated to beta status, while also adding support for obtaining TLS credentials from an external plugin.
Kubernetes 1.11 also makes it easier to see what's happening, as audit events can now be annotated with information about how an API request was handled:
- * Authorization sets `authorization.k8s.io/decision` and `authorization.k8s.io/reason` annotations with the authorization decision ("allow" or "forbid") and a human-readable description of why the decision was made (for example, RBAC includes the name of the role/binding/subject which allowed a request).
- * PodSecurityPolicy admission sets `podsecuritypolicy.admission.k8s.io/admit-policy` and `podsecuritypolicy.admission.k8s.io/validate-policy` annotations containing the name of the policy that allowed a pod to be admitted. (PodSecurityPolicy also gained the ability to [limit hostPath volume mounts to be read-only](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems).)
+
+* Authorization sets `authorization.k8s.io/decision` and `authorization.k8s.io/reason` annotations with the authorization decision ("allow" or "forbid") and a human-readable description of why the decision was made (for example, RBAC includes the name of the role/binding/subject which allowed a request).
+* PodSecurityPolicy admission sets `podsecuritypolicy.admission.k8s.io/admit-policy` and `podsecuritypolicy.admission.k8s.io/validate-policy` annotations containing the name of the policy that allowed a pod to be admitted. (PodSecurityPolicy also gained the ability to [limit hostPath volume mounts to be read-only](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems).)
In addition, the NodeRestriction admission plugin now prevents kubelets from modifying taints on their Node API objects, making it easier to keep track of which nodes should be in use.
@@ -97,6 +98,7 @@ SIG CLI's main focus this release was on refactoring `kubectl` internals to impr
SIG Cluster Lifecycle focused on improving kubeadm’s user experience by including a set of new commands related to maintaining the kubeadm configuration file, the API version of which has now has been incremented to `v1alpha2`. These commands can handle the migration of the configuration to a newer version, printing the default configuration, and listing and pulling the required container images for bootstrapping a cluster.
Other notable changes include:
+
* CoreDNS replaces kube-dns as the default DNS provider
* Improved user experience for environments without a public internet connection and users using other CRI runtimes than Docker
* Support for structured configuration for the kubelet, which avoids the need to modify the systemd drop-in file
@@ -138,6 +140,7 @@ Sig Storage graduated two features that had been introduced in previous versions
The StorageProtection feature, which prevents deletion of PVCs while Pods are still using them and of PVs while still bound to a PVC, is now generally available, and volume resizing, which lets you increase size of a volume after a Pod restarts is now beta, which means it is on by default.
New alpha features include:
+
* Online volume resizing will increase the filesystem size of a resized volume without requiring a Pod restart.
* AWS EBS and GCE PD volumes support increased limits on the maximum number of attached volumes per node.
* Subpath volume directories can be created using DownwardAPI environment variables.
diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md
index 16aad74e2..120a309bb 100644
--- a/content/en/docs/reference/access-authn-authz/authentication.md
+++ b/content/en/docs/reference/access-authn-authz/authentication.md
@@ -505,7 +505,10 @@ It is designed for use in combination with an authenticating proxy, which sets t
* `--requestheader-username-headers` Required, case-insensitive. Header names to check, in order, for the user identity. The first header containing a value is used as the username.
* `--requestheader-group-headers` 1.6+. Optional, case-insensitive. "X-Remote-Group" is suggested. Header names to check, in order, for the user's groups. All values in all specified headers are used as group names.
-* `--requestheader-extra-headers-prefix` 1.6+. Optional, case-insensitive. "X-Remote-Extra-" is suggested. Header prefixes to look for to determine extra information about the user (typically used by the configured authorization plugin). Any headers beginning with any of the specified prefixes have the prefix removed, the remainder of the header name becomes the extra key, and the header value is the extra value.
+* `--requestheader-extra-headers-prefix` 1.6+. Optional, case-insensitive. "X-Remote-Extra-" is suggested. Header prefixes to look for to determine extra information about the user (typically used by the configured authorization plugin). Any headers beginning with any of the specified prefixes have the prefix removed. The remainder of the header name is lowercased and [percent-decoded](https://tools.ietf.org/html/rfc3986#section-2.1) and becomes the extra key, and the header value is the extra value.
+{{< note >}}
+**Note:** Prior to 1.11.2, the extra key could only contain characters which were [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6).
+{{< /note >}}
For example, with this configuration:
@@ -522,6 +525,7 @@ GET / HTTP/1.1
X-Remote-User: fido
X-Remote-Group: dogs
X-Remote-Group: dachshunds
+X-Remote-Extra-Acme.com%2Fproject: some-project
X-Remote-Extra-Scopes: openid
X-Remote-Extra-Scopes: profile
```
@@ -534,6 +538,8 @@ groups:
- dogs
- dachshunds
extra:
+ acme.com/project:
+ - some-project
scopes:
- openid
- profile
@@ -587,7 +593,11 @@ The following HTTP headers can be used to performing an impersonation request:
* `Impersonate-User`: The username to act as.
* `Impersonate-Group`: A group name to act as. Can be provided multiple times to set multiple groups. Optional. Requires "Impersonate-User"
-* `Impersonate-Extra-( extra name )`: A dynamic header used to associate extra fields with the user. Optional. Requires "Impersonate-User"
+* `Impersonate-Extra-( extra name )`: A dynamic header used to associate extra fields with the user. Optional. Requires "Impersonate-User". In order to be preserved consistently, `( extra name )` should be lower-case, and any characters which aren't [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6) MUST be utf8 and [percent-encoded](https://tools.ietf.org/html/rfc3986#section-2.1).
+
+{{< note >}}
+**Note:** Prior to 1.11.2, `( extra name )` could only contain characters which were [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6).
+{{< /note >}}
An example set of headers:
@@ -596,6 +606,7 @@ Impersonate-User: jane.doe@example.com
Impersonate-Group: developers
Impersonate-Group: admins
Impersonate-Extra-dn: cn=jane,ou=engineers,dc=example,dc=com
+Impersonate-Extra-acme.com%2Fproject: some-project
Impersonate-Extra-scopes: view
Impersonate-Extra-scopes: development
```
@@ -781,7 +792,7 @@ To use bearer token credentials, the plugin returns a token in the status of the
```
Alternatively, a PEM-encoded client certificate and key can be returned to use TLS client auth.
-If the plugin returns a different certificate and key on a subsequent call, `k8s.io/client-go`
+If the plugin returns a different certificate and key on a subsequent call, `k8s.io/client-go`
will close existing connections with the server to force a new TLS handshake.
If specified, `clientKeyData` and `clientCertificateData` must both must be present.
diff --git a/content/en/docs/reference/access-authn-authz/authorization.md b/content/en/docs/reference/access-authn-authz/authorization.md
index f352f4caf..09212d0a6 100644
--- a/content/en/docs/reference/access-authn-authz/authorization.md
+++ b/content/en/docs/reference/access-authn-authz/authorization.md
@@ -122,7 +122,7 @@ kind: SelfSubjectAccessReview
spec:
resourceAttributes:
group: apps
- name: deployments
+ resource: deployments
verb: create
namespace: dev
EOF
@@ -134,7 +134,7 @@ metadata:
spec:
resourceAttributes:
group: apps
- name: deployments
+ resource: deployments
namespace: dev
verb: create
status:
diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
index 127a8caaf..ee0669ac9 100644
--- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md
+++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
@@ -1,14 +1,16 @@
---
title: Feature Gates
weight: 10
-notitle: true
+title: Feature Gates
+content_template: templates/concept
---
-## Feature Gates
-
+{{% capture overview %}}
This page contains an overview of the various feature gates an administrator
can specify on different Kubernetes components.
+{{% /capture %}}
+{{% capture body %}}
## Overview
Feature gates are a set of key=value pairs that describe alpha or experimental
@@ -58,6 +60,7 @@ different Kubernetes components.
| `DynamicVolumeProvisioning` | `true` | Alpha | 1.3 | 1.7 |
| `DynamicVolumeProvisioning` | `true` | GA | 1.8 | |
| `EnableEquivalenceClassCache` | `false` | Alpha | 1.8 | |
+| `ExpandInUsePersistentVolumes` | `false` | Alpha | 1.11 | |
| `ExpandPersistentVolumes` | `false` | Alpha | 1.8 | 1.10 |
| `ExpandPersistentVolumes` | `true` | Beta | 1.11 | |
| `ExperimentalCriticalPodAnnotation` | `false` | Alpha | 1.5 | |
@@ -182,6 +185,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `DynamicProvisioningScheduling`: Extend the default scheduler to be aware of volume topology and handle PV provisioning.
- `DynamicVolumeProvisioning`(*deprecated*): Enable the [dynamic provisioning](/docs/concepts/storage/dynamic-provisioning/) of persistent volumes to Pods.
- `EnableEquivalenceClassCache`: Enable the scheduler to cache equivalence of nodes when scheduling Pods.
+- `ExpandInUsePersistentVolumes`: Enable expanding in-use PVCs. See [Resizing an in-use PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim).
- `ExpandPersistentVolumes`: Enable the expanding of persistent volumes. See [Expanding Persistent Volumes Claims](/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims).
- `ExperimentalCriticalPodAnnotation`: Enable annotating specific pods as *critical* so that their [scheduling is guaranteed](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/).
- `ExperimentalHostUserNamespaceDefaultingGate`: Enabling the defaulting user
@@ -246,3 +250,4 @@ Each feature gate is designed for enabling/disabling a specific feature:
PersistentVolumeClaim (PVC) binding aware of scheduling decisions. It also
enables the usage of [`local`](/docs/concepts/storage/volumes/#local) volume
type when used together with the `PersistentLocalVolumes` feature gate.
+{{% /capture %}}
diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md b/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md
index 08b9eef63..a2a84d9ff 100644
--- a/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md
+++ b/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md
@@ -4,16 +4,19 @@ reviewers:
- mikedanese
- jcbsmpsn
title: TLS bootstrapping
+content_template: templates/concept
---
-{{< toc >}}
-
-## Overview
+{{% capture overview %}}
This document describes how to set up TLS client certificate bootstrapping for kubelets.
Kubernetes 1.4 introduced an API for requesting certificates from a cluster-level Certificate Authority (CA). The original intent of this API is to enable provisioning of TLS client certificates for kubelets. The proposal can be found [here](https://github.com/kubernetes/kubernetes/pull/20439)
and progress on the feature is being tracked as [feature #43](https://github.com/kubernetes/features/issues/43).
+{{% /capture %}}
+
+{{% capture body %}}
+
## kube-apiserver configuration
The API server should be configured with an [authenticator](/docs/reference/access-authn-authz/authentication/) that can authenticate tokens as a user in the `system:bootstrappers` group.
@@ -24,16 +27,18 @@ controller. As this feature matures, you should ensure tokens are bound to a Rol
While any authentication strategy can be used for the kubelet's initial bootstrap credentials, the following two authenticators are recommended for ease of provisioning.
-1. [Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) - __alpha__
+1. [Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) - __beta__
2. [Token authentication file](#token-authentication-file)
-Using bootstrap tokens is currently __alpha__ and will simplify the management of bootstrap token management especially in a HA scenario.
+Using bootstrap tokens is currently __beta__ and simplifies the management of bootstrap tokens, especially in an HA scenario.
### Token authentication file
Tokens are arbitrary but should represent at least 128 bits of entropy derived from a secure random number
generator (such as /dev/urandom on most modern systems). There are multiple ways you can generate a token. For example:
-`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`
+```
+head -c 16 /dev/urandom | od -An -t x | tr -d ' '
+```
will generate tokens that look like `02b50b05283e98dd0fd71db496ef01e8`
@@ -194,11 +199,13 @@ kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --ku
When starting the kubelet, if the file specified by `--kubeconfig` does not exist, the bootstrap kubeconfig is used to request a client certificate from the API server. On approval of the certificate request and receipt back by the kubelet, a kubeconfig file referencing the generated key and obtained certificate is written to the path specified by `--kubeconfig`. The certificate and key file will be placed in the directory specified by `--cert-dir`.
+{{< note >}}
**Note:** The following flags are required to enable this bootstrapping when starting the kubelet:
```
--bootstrap-kubeconfig="/path/to/bootstrap/kubeconfig"
```
+{{< /note >}}
Additionally, in 1.7 the kubelet implements __alpha__ features for enabling rotation of both its client and/or serving certs.
These can be enabled through the respective `RotateKubeletClientCertificate` and `RotateKubeletServerCertificate` feature
@@ -220,3 +227,5 @@ approval controller, but for the alpha version of the API it can be done manuall
An administrator can list CSRs with `kubectl get csr` and describe one in detail with `kubectl describe csr `. Before the 1.6 release there were
[no direct approve/deny commands](https://github.com/kubernetes/kubernetes/issues/30163) so an approver had to update
the Status field directly ([rough how-to](https://github.com/gtank/csrctl)). Later versions of Kubernetes offer `kubectl certificate approve ` and `kubectl certificate deny ` commands.
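
For example, a minimal approval flow might look like the following; the CSR name shown here is hypothetical and will differ in your cluster:

```
kubectl get csr
kubectl describe csr node-csr-4Nl6qR
kubectl certificate approve node-csr-4Nl6qR
```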
+
+{{% /capture %}}
diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet.md b/content/en/docs/reference/command-line-tools-reference/kubelet.md
index 004ffdf4b..e31299d69 100644
--- a/content/en/docs/reference/command-line-tools-reference/kubelet.md
+++ b/content/en/docs/reference/command-line-tools-reference/kubelet.md
@@ -1116,6 +1116,13 @@ kubelet [flags]
File containing x509 Certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir.
+
+
--tls-cipher-suites stringSlice
+
+
+
Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA
The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
+
The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
false
integer (int32)
diff --git a/content/en/docs/reference/glossary/approver.md b/content/en/docs/reference/glossary/approver.md
index 44a740474..32dcd8337 100755
--- a/content/en/docs/reference/glossary/approver.md
+++ b/content/en/docs/reference/glossary/approver.md
@@ -14,5 +14,5 @@ tags:
-While code review is focused on code quality and correctness, approval is focused on the holistic acceptance of a contribution. Holistic acceptance includes backwards/forwards compatibility, adhering to API and flag conventions, subtle performance and correctness issues, interactions with other parts of the system, and others. Approver status is scoped to a part of the codebase.
+While code review is focused on code quality and correctness, approval is focused on the holistic acceptance of a contribution. Holistic acceptance includes backwards/forwards compatibility, adhering to API and flag conventions, subtle performance and correctness issues, interactions with other parts of the system, and others. Approver status is scoped to a part of the codebase. Approvers were previously referred to as maintainers.
diff --git a/content/en/docs/reference/glossary/csi.md b/content/en/docs/reference/glossary/csi.md
new file mode 100644
index 000000000..29e5550cc
--- /dev/null
+++ b/content/en/docs/reference/glossary/csi.md
@@ -0,0 +1,21 @@
+---
+title: Container Storage Interface (CSI)
+id: csi
+date: 2018-06-25
+full_link: https://kubernetes.io/docs/concepts/storage/volumes/#csi
+short_description: >
+ The Container Storage Interface (CSI) defines a standard interface to expose storage systems to containers.
+
+
+aka:
+tags:
+- storage
+---
+ The Container Storage Interface (CSI) defines a standard interface to expose storage systems to containers.
+
+
+
+CSI allows vendors to create custom storage plugins for Kubernetes without adding them to the Kubernetes repository (out-of-tree plugins). To use a CSI driver from a storage provider, you must first [deploy it to your cluster](https://kubernetes-csi.github.io/docs/Setup.html). You will then be able to create a {{< glossary_tooltip text="Storage Class" term_id="storage-class" >}} that uses that CSI driver.
+
+* [CSI in the Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/#csi)
+* [List of available CSI drivers](https://kubernetes-csi.github.io/docs/Drivers.html)
diff --git a/content/en/docs/reference/glossary/flexvolume.md b/content/en/docs/reference/glossary/flexvolume.md
new file mode 100644
index 000000000..612c1abed
--- /dev/null
+++ b/content/en/docs/reference/glossary/flexvolume.md
@@ -0,0 +1,22 @@
+---
+title: Flexvolume
+id: flexvolume
+date: 2018-06-25
+full_link: https://kubernetes.io/docs/concepts/storage/volumes/#flexvolume
+short_description: >
+ Flexvolume is an interface for creating out-of-tree volume plugins. The {{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}} is a newer interface which addresses several problems with Flexvolumes.
+
+
+aka:
+tags:
+- storage
+---
+ Flexvolume is an interface for creating out-of-tree volume plugins. The {{< glossary_tooltip text="Container Storage Interface" term_id="csi" >}} is a newer interface which addresses several problems with Flexvolumes.
+
+
+
+Flexvolumes enable users to write their own drivers and add support for their volumes in Kubernetes. FlexVolume driver binaries and dependencies must be installed on host machines. This requires root access. The Storage SIG suggests implementing a {{< glossary_tooltip text="CSI" term_id="csi" >}} driver if possible since it addresses the limitations with Flexvolumes.
+
+* [Flexvolume in the Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/#flexvolume)
+* [More information on Flexvolumes](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md)
+* [Volume Plugin FAQ for Storage Vendors](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md)
diff --git a/content/en/docs/reference/glossary/maintainer.md b/content/en/docs/reference/glossary/maintainer.md
deleted file mode 100755
index 8ca64a7ef..000000000
--- a/content/en/docs/reference/glossary/maintainer.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: Maintainer
-id: maintainer
-date: 2018-04-12
-full_link:
-short_description: >
- A highly experienced contributor, active in multiple areas of Kubernetes, who has cross-area ownership and write access to a project's GitHub repository.
-
-aka:
-tags:
-- community
----
- A highly experienced {{< glossary_tooltip text="contributor" term_id="contributor" >}}, active in multiple areas of Kubernetes, who has cross-area ownership and write access to a project's GitHub repository.
-
-
-
-Maintainers work holistically across the project to maintain its health and success and have made substantial contributions, both through code development and broader organizational efforts.
-
diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md
index dba8ea36a..c51c24af3 100644
--- a/content/en/docs/reference/kubectl/overview.md
+++ b/content/en/docs/reference/kubectl/overview.md
@@ -349,6 +349,87 @@ $ kubectl logs
$ kubectl logs -f
```
+## Examples: Creating and using plugins
+
+Use the following set of examples to help you familiarize yourself with writing and using `kubectl` plugins:
+
+```shell
+// create a simple plugin in any language and name the resulting executable file
+// so that it begins with the prefix "kubectl-"
+$ cat ./kubectl-hello
+#!/bin/bash
+
+# this plugin prints the words "hello world"
+echo "hello world"
+
+// with our plugin written, let's make it executable
+$ sudo chmod +x ./kubectl-hello
+
+// and move it to a location in our PATH
+$ sudo mv ./kubectl-hello /usr/local/bin
+
+// we have now created and "installed" a kubectl plugin.
+// we can begin using our plugin by invoking it from kubectl as if it were a regular command
+$ kubectl hello
+hello world
+
+// we can "uninstall" a plugin, by simply removing it from our PATH
+$ sudo rm /usr/local/bin/kubectl-hello
+```
+
+To view all of the plugins that are available to `kubectl`, we can use
+the `kubectl plugin list` subcommand:
+
+```shell
+$ kubectl plugin list
+The following kubectl-compatible plugins are available:
+
+/usr/local/bin/kubectl-hello
+/usr/local/bin/kubectl-foo
+/usr/local/bin/kubectl-bar
+
+// this command can also warn us about plugins that are
+// not executable, or that are overshadowed by other
+// plugins, for example
+$ sudo chmod -x /usr/local/bin/kubectl-foo
+$ kubectl plugin list
+The following kubectl-compatible plugins are available:
+
+/usr/local/bin/kubectl-hello
+/usr/local/bin/kubectl-foo
+ - warning: /usr/local/bin/kubectl-foo identified as a plugin, but it is not executable
+/usr/local/bin/kubectl-bar
+
+error: one plugin warning was found
+```
+
+We can think of plugins as a means to build more complex functionality on top
+of the existing kubectl commands:
+
+```shell
+$ cat ./kubectl-whoami
+#!/bin/bash
+
+# this plugin makes use of the `kubectl config` command in order to output
+# information about the current user, based on the currently selected context
+kubectl config view --template='{{ range .contexts }}{{ if eq .name "'$(kubectl config current-context)'" }}Current user: {{ .context.user }}{{ end }}{{ end }}'
+```
+
+Running the above plugin gives us an output containing the user for the currently selected
+context in our KUBECONFIG file:
+
+```shell
+// make the file executable
+$ sudo chmod +x ./kubectl-whoami
+
+// and move it into our PATH
+$ sudo mv ./kubectl-whoami /usr/local/bin
+
+$ kubectl whoami
+Current user: plugins-user
+```
+
+To find out more about plugins, take a look at the [example cli plugin](https://github.com/kubernetes/sample-cli-plugin).
## Next steps
diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md
index 7bcd1ac02..0c8979b53 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md
@@ -102,6 +102,9 @@ configuration file options. This file is passed in the `--config` option.
In Kubernetes 1.11 and later, the default configuration can be printed out using the
[kubeadm config print-default](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command.
+It is **recommended** that you migrate your old `v1alpha1` configuration to `v1alpha2` using
+the [kubeadm config migrate](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command,
+because `v1alpha1` will be removed in Kubernetes 1.12.
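+
+As a sketch, assuming your kubeadm version supports the `--old-config` and `--new-config` flags,
+the migration can be performed as follows (file paths are placeholders):
+
+```
+kubeadm config migrate --old-config /path/to/old-config.yaml --new-config /path/to/new-config.yaml
+```
+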
For more details on each field in the configuration you can navigate to our
[API reference pages](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#MasterConfiguration).
@@ -259,26 +262,10 @@ unifiedControlPlaneImage: ""
For information about kube-proxy parameters in the MasterConfiguration see:
- [kube-proxy](https://godoc.org/k8s.io/kubernetes/pkg/proxy/apis/kubeproxyconfig/v1alpha1#KubeProxyConfiguration)
-### Passing custom arguments to control plane components {#custom-args}
+### Passing custom flags to control plane components {#control-plane-flags}
-If you would like to override or extend the behaviour of a control plane component, you can provide
-extra arguments to kubeadm. When the component is deployed, these additional arguments are added to
-the Pod command itself.
-
-For example, to add additional feature-gate arguments to the API server, your [configuration file](#config-file)
-will need to look like this:
-
-```
-apiVersion: kubeadm.k8s.io/v1alpha2
-kind: MasterConfiguration
-apiServerExtraArgs:
- feature-gates: APIResponseCompression=true
-```
-
-To customize the scheduler or controller-manager, use `schedulerExtraArgs` and `controllerManagerExtraArgs` respectively.
-
-For more information on parameters for the controller-manager and scheduler, see:
-- [high-availability](/docs/setup/independent/high-availability)
+For information about passing flags to control plane components see:
+- [control-plane-flags](/docs/setup/independent/control-plane-flags/)
### Using custom images {#custom-images}
@@ -294,6 +281,8 @@ Allowed customization are:
* To provide a `unifiedControlPlaneImage` to be used instead of different images for control plane components.
* To provide a specific `etcd.image` to be used instead of the image available at`k8s.gcr.io`.
+Please note that the configuration field `kubernetesVersion` or the command line flag
+`--kubernetes-version` affects the version of the images.
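+
+For example, to see which images a given version would use, you can run the following (this assumes
+your kubeadm version provides the `kubeadm config images list` subcommand; the version shown is only
+an example):
+
+```
+kubeadm config images list --kubernetes-version v1.11.1
+```
+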
### Using custom certificates {#custom-certificates}
diff --git a/content/en/docs/reference/using-api/api-overview.md b/content/en/docs/reference/using-api/api-overview.md
index e63848523..2d38030e3 100644
--- a/content/en/docs/reference/using-api/api-overview.md
+++ b/content/en/docs/reference/using-api/api-overview.md
@@ -33,6 +33,7 @@ multiple API versions, each at a different API path. For example: `/api/v1` or
`/apis/extensions/v1beta1`.
The version is set at the API level rather than at the resource or field level to:
+
- Ensure that the API presents a clear and consistent view of system resources and behavior.
- Enable control access to end-of-life and/or experimental APIs.
diff --git a/content/en/docs/reference/using-api/client-libraries.md b/content/en/docs/reference/using-api/client-libraries.md
index 1461c5eeb..ce7f2acd6 100644
--- a/content/en/docs/reference/using-api/client-libraries.md
+++ b/content/en/docs/reference/using-api/client-libraries.md
@@ -60,6 +60,7 @@ their authors, not the Kubernetes team.
| Python | [github.com/mnubo/kubernetes-py](https://github.com/mnubo/kubernetes-py) |
| Ruby | [github.com/Ch00k/kuber](https://github.com/Ch00k/kuber) |
| Ruby | [github.com/abonas/kubeclient](https://github.com/abonas/kubeclient) |
+| Ruby | [github.com/kontena/k8s-client](https://github.com/kontena/k8s-client) |
| Scala | [github.com/doriordan/skuber](https://github.com/doriordan/skuber) |
| dotNet | [github.com/tonnyeremin/kubernetes_gen](https://github.com/tonnyeremin/kubernetes_gen) |
| DotNet (RestSharp) | [github.com/masroorhasan/Kubernetes.DotNet](https://github.com/masroorhasan/Kubernetes.DotNet) |
diff --git a/content/en/docs/setup/certificates.md b/content/en/docs/setup/certificates.md
new file mode 100644
index 000000000..39db59c3e
--- /dev/null
+++ b/content/en/docs/setup/certificates.md
@@ -0,0 +1,139 @@
+---
+title: PKI Certificates and Requirements
+reviewers:
+- sig-cluster-lifecycle
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+Kubernetes requires PKI certificates for authentication over TLS.
+If you install Kubernetes with [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), the certificates that your cluster requires are automatically generated.
+You can also generate your own certificates -- for example, to keep your private keys more secure by not storing them on the API server.
+This page explains the certificates that your cluster requires.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## How certificates are used by your cluster
+
+Kubernetes requires PKI for the following operations:
+
+* Client certificates for the kubelet to authenticate to the API server
+* Server certificate for the API server endpoint
+* Client certificates for administrators of the cluster to authenticate to the API server
+* Client certificates for the API server to talk to the kubelets
+* Client certificate for the API server to talk to etcd
+* Client certificate/kubeconfig for the controller manager to talk to the API server
+* Client certificate/kubeconfig for the scheduler to talk to the API server.
+* Client and server certificates for the [front-proxy][proxy]
+
+{{< note >}}
+**Note:** `front-proxy` certificates are required only if you run kube-proxy to support [an extension API server](/docs/tasks/access-kubernetes-api/setup-extension-api-server/).
+{{< /note >}}
+
+etcd also implements mutual TLS to authenticate clients and peers.
+
+## Where certificates are stored
+
+If you install Kubernetes with kubeadm, certificates are stored in `/etc/kubernetes/pki`. All paths in this documentation are relative to that directory.
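+
+For example, on a kubeadm-provisioned control plane node you can list them as follows (the exact set
+of files depends on your cluster configuration):
+
+```shell
+ls /etc/kubernetes/pki /etc/kubernetes/pki/etcd
+```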
+
+## Configure certificates manually
+
+If you don't want kubeadm to generate the required certificates, you can create them in either of the following ways.
+
+### Single root CA
+
+You can create a single root CA, controlled by an administrator. This root CA can then create multiple intermediate CAs, and delegate all further creation to Kubernetes itself.
+
+Required CAs:
+
+| path | Default CN | description |
+|------------------------|---------------------------|----------------------------------|
+| ca.crt,key | kubernetes-ca | Kubernetes general CA |
+| etcd/ca.crt,key | etcd-ca | For all etcd-related functions |
+| front-proxy-ca.crt,key | kubernetes-front-proxy-ca | For the [front-end proxy][proxy] |
+
+### All certificates
+
+If you don't wish to copy these private keys to your API servers, you can generate all certificates yourself; a brief sketch of doing so follows the key usage table below.
+
+Required certificates:
+
+| Default CN | Parent CA | O (in Subject) | kind | hosts (SAN) |
+|-------------------------------|---------------------------|----------------|----------------------------------------|---------------------------------------------|
+| kube-etcd | etcd-ca | | server, client [1][etcdbug] | `localhost`, `127.0.0.1` |
+| kube-etcd-peer | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` |
+| kube-etcd-healthcheck-client | etcd-ca | | client | |
+| kube-apiserver-etcd-client | etcd-ca | system:masters | client | |
+| kube-apiserver | kubernetes-ca | | server | ``, ``, ``, `[1]` |
+| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
+| front-proxy-client | kubernetes-front-proxy-ca | | client | |
+
+[1]: `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`, `kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`
+
+where `kind` maps to one or more of the [x509 key usage][usage] types:
+
+| kind | Key usage |
+|--------|---------------------------------------------------------------------------------|
+| server | digital signature, key encipherment, server auth |
+| client | digital signature, key encipherment, client auth |
+
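+As a minimal sketch of generating such certificates yourself with `openssl` (file names, CNs, and
+validity periods are illustrative only; any CA tooling, such as cfssl, works equally well):
+
+```shell
+# Create a CA
+openssl genrsa -out ca.key 2048
+openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes-ca" -days 365 -out ca.crt
+
+# Create and sign a "client" certificate with the key usages listed above
+openssl genrsa -out client.key 2048
+openssl req -new -key client.key -subj "/CN=kube-apiserver-kubelet-client/O=system:masters" -out client.csr
+openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
+  -extfile <(printf "keyUsage=digitalSignature,keyEncipherment\nextendedKeyUsage=clientAuth") \
+  -out client.crt
+```
+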
+### Certificate paths
+
+Certificates should be placed in a recommended path (as used by [kubeadm][kubeadm]). Paths should be specified using the given argument regardless of location.
+
+| Default CN                   | recommended key path         | recommended cert path        | command        | key argument                 | cert argument                |
+|------------------------------|------------------------------|------------------------------|----------------|------------------------------|------------------------------|
+| etcd-ca                      |                              | etcd/ca.crt                  | kube-apiserver |                              | --etcd-cafile                |
+| etcd-client                  | apiserver-etcd-client.key    | apiserver-etcd-client.crt    | kube-apiserver | --etcd-keyfile               | --etcd-certfile              |
+| kubernetes-ca                |                              | ca.crt                       | kube-apiserver |                              | --client-ca-file             |
+| kube-apiserver               | apiserver.key                | apiserver.crt                | kube-apiserver | --tls-private-key-file       | --tls-cert-file              |
+| apiserver-kubelet-client     |                              | apiserver-kubelet-client.crt | kube-apiserver |                              | --kubelet-client-certificate |
+| front-proxy-client           | front-proxy-client.key       | front-proxy-client.crt       | kube-apiserver | --proxy-client-key-file      | --proxy-client-cert-file     |
+| | | | | | |
+| etcd-ca | | etcd/ca.crt | etcd | | --trusted-ca-file, --peer-trusted-ca-file |
+| kube-etcd | | etcd/server.crt | etcd | | --cert-file |
+| kube-etcd-peer | etcd/peer.key | etcd/peer.crt | etcd | --peer-key-file | --peer-cert-file |
+| etcd-ca | | etcd/ca.crt | etcdctl[2] | | --cacert |
+| kube-etcd-healthcheck-client | etcd/healthcheck-client.key | etcd/healthcheck-client.crt | etcdctl[2] | --key | --cert |
+
+[2]: For a liveness probe, if self-hosted
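+
+For illustration, on a kubeadm layout the kube-apiserver rows above translate into flags like the
+following excerpt (paths are the kubeadm defaults; a real invocation needs many additional flags and
+is abbreviated here):
+
+```shell
+kube-apiserver \
+  --client-ca-file=/etc/kubernetes/pki/ca.crt \
+  --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
+  --tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
+  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
+  --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \
+  --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
+  --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key \
+  --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt \
+  --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
+```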
+
+## Configure certificates for user accounts
+
+You must manually configure the following administrator account and service accounts:
+
+| filename | credential name | Default CN | O (in Subject) |
+|-------------------------|----------------------------|--------------------------------|----------------|
+| admin.conf | default-admin | kubernetes-admin | system:masters |
+| kubelet.conf | default-auth | system:node:`` | system:nodes |
+| controller-manager.conf | default-controller-manager | system:kube-controller-manager | |
+| scheduler.conf | default-manager | system:kube-scheduler | |
+
+1. For each config, generate an x509 cert/key pair with the given CN and O.
+
+1. Run `kubectl` as follows for each config:
+
+```shell
+KUBECONFIG= kubectl config set-cluster default-cluster --server=https://:6443 --certificate-authority --embed-certs
+KUBECONFIG= kubectl config set-credentials --client-key .pem --client-certificate .pem --embed-certs
+KUBECONFIG= kubectl config set-context default-system --cluster default-cluster --user
+KUBECONFIG= kubectl config use-context default-system
+```
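+
+For example, generating `scheduler.conf` might look like the following (the file names, paths, and
+API server address are illustrative only):
+
+```shell
+KUBECONFIG=scheduler.conf kubectl config set-cluster default-cluster --server=https://10.0.0.1:6443 --certificate-authority ca.crt --embed-certs
+KUBECONFIG=scheduler.conf kubectl config set-credentials default-manager --client-key kube-scheduler.key --client-certificate kube-scheduler.crt --embed-certs
+KUBECONFIG=scheduler.conf kubectl config set-context default-system --cluster default-cluster --user default-manager
+KUBECONFIG=scheduler.conf kubectl config use-context default-system
+```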
+
+These files are used as follows:
+
+| filename | command | comment |
+|-------------------------|-------------------------|-----------------------------------------------------------------------|
+| admin.conf | kubectl | Configures administrator user for the cluster |
+| kubelet.conf | kubelet | One required for each node in the cluster. |
+| controller-manager.conf | kube-controller-manager | Must be added to manifest in `manifests/kube-controller-manager.yaml` |
+| scheduler.conf | kube-scheduler | Must be added to manifest in `manifests/kube-scheduler.yaml` |
+
+[usage]: https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage
+[kubeadm]: /docs/reference/setup-tools/kubeadm/kubeadm/
+[proxy]: /docs/tasks/access-kubernetes-api/configure-aggregation-layer/
+
+{{% /capture %}}
\ No newline at end of file
diff --git a/content/en/docs/setup/custom-cloud/kops.md b/content/en/docs/setup/custom-cloud/kops.md
index 324ad80dc..0ce7f547f 100644
--- a/content/en/docs/setup/custom-cloud/kops.md
+++ b/content/en/docs/setup/custom-cloud/kops.md
@@ -12,9 +12,9 @@ kops is an opinionated provisioning system:
* Fully automated installation
* Uses DNS to identify clusters
* Self-healing: everything runs in Auto-Scaling Groups
-* Limited OS support (Debian preferred, Ubuntu 16.04 supported, early support for CentOS & RHEL)
-* High-Availability support
-* Can directly provision, or generate terraform manifests
+* Multiple OS support (Debian, Ubuntu 16.04, CentOS, RHEL, Amazon Linux, and CoreOS) - see the [images.md](https://github.com/kubernetes/kops/blob/master/docs/images.md)
+* High-Availability support - see the [high_availability.md](https://github.com/kubernetes/kops/blob/master/docs/high_availability.md)
+* Can directly provision, or generate terraform manifests - see the [terraform.md](https://github.com/kubernetes/kops/blob/master/docs/terraform.md)
If your opinions differ from these you may prefer to build your own cluster using [kubeadm](/docs/admin/kubeadm/) as
a building block. kops builds on the kubeadm work.
@@ -34,7 +34,7 @@ Download kops from the [releases page](https://github.com/kubernetes/kops/releas
On macOS:
```
-curl -OL https://github.com/kubernetes/kops/releases/download/1.8.0/kops-darwin-amd64
+curl -OL https://github.com/kubernetes/kops/releases/download/1.10.0/kops-darwin-amd64
chmod +x kops-darwin-amd64
mv kops-darwin-amd64 /usr/local/bin/kops
# you can also install using Homebrew
@@ -44,7 +44,7 @@ brew update && brew install kops
On Linux:
```
-wget https://github.com/kubernetes/kops/releases/download/1.8.0/kops-linux-amd64
+wget https://github.com/kubernetes/kops/releases/download/1.10.0/kops-linux-amd64
chmod +x kops-linux-amd64
mv kops-linux-amd64 /usr/local/bin/kops
```
@@ -153,6 +153,7 @@ See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to expl
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
* Learn about `kops` [advanced usage](https://github.com/kubernetes/kops)
+* See the `kops` [docs](https://github.com/kubernetes/kops) section for tutorials, best practices and advanced configuration options.
## Cleanup
@@ -160,6 +161,6 @@ See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to expl
## Feedback
-* Slack Channel: [#sig-aws](https://kubernetes.slack.com/messages/sig-aws/) has a lot of kops users
+* Slack Channel: [#kops-users](https://kubernetes.slack.com/messages/kops-users/)
* [GitHub Issues](https://github.com/kubernetes/kops/issues)
diff --git a/content/en/docs/setup/independent/kubelet-integration.md b/content/en/docs/setup/independent/kubelet-integration.md
new file mode 100644
index 000000000..80836bdd0
--- /dev/null
+++ b/content/en/docs/setup/independent/kubelet-integration.md
@@ -0,0 +1,200 @@
+---
+reviewers:
+- sig-cluster-lifecycle
+title: Configuring each kubelet in your cluster using kubeadm
+content_template: templates/concept
+weight: 40
+---
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="1.11" state="stable" >}}
+
+The lifecycle of the kubeadm CLI tool is decoupled from the
+[Kubernetes Node Agent](/docs/reference/command-line-tools-reference/kubelet), which is a daemon that runs
+on each Kubernetes master or Node. The kubeadm CLI tool is executed by the user when Kubernetes is
+initialized or upgraded, whereas the kubelet is always running in the background.
+
+Since the kubelet is a daemon, it needs to be maintained by some kind of init
+system or service manager. When the kubelet is installed using DEBs or RPMs,
+systemd is configured to manage the kubelet. You can use a different service
+manager instead, but you need to configure it manually.
+
+Some kubelet configuration details need to be the same across all kubelets involved in the cluster, while
+other configuration aspects need to be set on a per-kubelet basis, to accommodate the different
+characteristics of a given machine, such as OS, storage, and networking. You can manage the configuration
+of your kubelets manually, but [kubeadm now provides a `MasterConfig` API type for managing your
+kubelet configurations centrally](#configure-kubelets-using-kubeadm).
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Kubelet configuration patterns
+
+The following sections describe patterns for kubelet configuration that are simplified by
+using kubeadm, rather than managing the kubelet configuration for each Node manually.
+
+### Propagating cluster-level configuration to each kubelet
+
+You can provide the kubelet with default values to be used by `kubeadm init` and `kubeadm join`
+commands. Interesting examples include using a different CRI runtime or setting the default subnet
+used by services.
+
+If you want your services to use the subnet `10.96.0.0/12` by default, you can pass
+the `--service-cidr` parameter to kubeadm:
+
+```bash
+kubeadm init --service-cidr 10.96.0.0/12
+```
+
+Virtual IPs for services are now allocated from this subnet. You also need to set the DNS address used
+by the kubelet, using the `--cluster-dns` flag. This setting needs to be the same for every kubelet
+on every manager and Node in the cluster. The kubelet provides a versioned, structured API object
+that can configure most parameters in the kubelet and push out this configuration to each running
+kubelet in the cluster. This object is called **the kubelet's ComponentConfig**.
+The ComponentConfig allows the user to specify flags such as the cluster DNS IP addresses expressed as
+a list of values to a camelCased key, illustrated by the following example:
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+clusterDNS:
+- 10.96.0.10
+```
+
+See the
+[API reference for the
+kubelet ComponentConfig](https://godoc.org/k8s.io/kubernetes/pkg/kubelet/apis/kubeletconfig#KubeletConfiguration)
+for more information.
+
+### Providing instance-specific configuration details
+
+Some hosts require specific kubelet configurations, due to differences in hardware, operating system,
+networking, or other host-specific parameters. The following list provides a few examples.
+
+- The path to the DNS resolution file, as specified by the `--resolv-conf` kubelet
+ configuration flag, may differ among operating systems, or depending on whether you are using
+ `systemd-resolved`. If this path is wrong, DNS resolution will fail on the Node whose kubelet
+ is configured incorrectly.
+
+- The Node API object `.metadata.name` is set to the machine's hostname by default,
+ unless you are using a cloud provider. You can use the `--hostname-override` flag to override the
+ default behavior if you need to specify a Node name different from the machine's hostname.
+
+- Currently, the kubelet cannot automatically detect the cgroup driver used by the CRI runtime,
+ but the value of `--cgroup-driver` must match the cgroup driver used by the CRI runtime to ensure
+ the health of the kubelet.
+
+- Depending on the CRI runtime your cluster uses, you may need to specify different flags to the kubelet.
+ For instance, when using Docker, you need to specify flags such as `--network-plugin=cni`, but if you
+ are using an external runtime, you need to specify `--container-runtime=remote` and specify the CRI
+ endpoint using the `--container-runtime-endpoint=` flag.
+
+You can specify these flags by configuring an individual kubelet's configuration in your service manager,
+such as systemd.
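+
+For example, on a DEB-based host you could supply instance-specific flags through the
+`KUBELET_EXTRA_ARGS` environment file that the kubeadm-installed systemd unit sources (see
+[the kubelet drop-in file for systemd](#the-kubelet-drop-in-file-for-systemd) later in this page);
+the flag values here are illustrative only:
+
+```bash
+cat << EOF > /etc/default/kubelet
+KUBELET_EXTRA_ARGS="--hostname-override=node-1.example.com --resolv-conf=/run/systemd/resolve/resolv.conf"
+EOF
+systemctl daemon-reload && systemctl restart kubelet
+```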
+
+## Configure kubelets using kubeadm
+
+The kubeadm config API type `MasterConfiguration` embeds the kubelet's ComponentConfig under
+the `.kubeletConfiguration.baseConfig` key. Any user writing a `MasterConfiguration`
+file can use this configuration key to also set the base-level configuration for all kubelets
+in the cluster.
+
+### Workflow when using `kubeadm init`
+
+When you call `kubeadm init`, the `.kubeletConfiguration.baseConfig` structure is marshalled to disk
+at `/var/lib/kubelet/config.yaml`, and also uploaded to a ConfigMap in the cluster. The ConfigMap
+is named `kubelet-config-1.X`, where `X` is the minor version of the Kubernetes version you are
+initializing. A kubelet configuration file is also written to `/etc/kubernetes/kubelet.conf` with the
+baseline cluster-wide configuration for all kubelets in the cluster. This configuration file
+points to the client certificates that allow the kubelet to communicate with the API server. This
+addresses the need to
+[propagate cluster-level configuration to each kubelet](#propagating-cluster-level-configuration-to-each-kubelet).
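+
+For example, on a v1.11 cluster you can inspect the uploaded baseline configuration with the
+following command (the ConfigMap name tracks the control plane's minor version):
+
+```bash
+kubectl -n kube-system get configmap kubelet-config-1.11 -o yaml
+```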
+
+To address the second pattern of
+[providing instance-specific configuration details](#providing-instance-specific-configuration-details),
+kubeadm writes an environment file to `/var/lib/kubelet/kubeadm-flags.env`, which contains a list of
+flags to pass to the kubelet when it starts. The flags are presented in the file like this:
+
+```bash
+KUBELET_KUBEADM_ARGS="--flag1=value1 --flag2=value2 ..."
+```
+
+In addition to the flags used when starting the kubelet, the file also contains dynamic
+parameters such as the cgroup driver and whether to use a different CRI runtime socket
+(`--cri-socket`).
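+
+For example, on a Docker-based node the generated file might contain something like the following;
+the exact flags and values depend on the host and are shown here for illustration only:
+
+```bash
+cat /var/lib/kubelet/kubeadm-flags.env
+KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni"
+```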
+
+After marshalling these two files to disk, kubeadm attempts to run the following two
+commands, if you are using systemd:
+
+```bash
+systemctl daemon-reload && systemctl restart kubelet
+```
+
+If the reload and restart are successful, the normal `kubeadm init` workflow continues.
+
+### Workflow when using `kubeadm join`
+
+When you run `kubeadm join`, kubeadm uses the Bootstrap Token credential to perform
+a TLS bootstrap, which fetches the credential needed to download the
+`kubelet-config-1.X` ConfigMap and writes it to `/var/lib/kubelet/config.yaml`. The dynamic
+environment file is generated in exactly the same way as `kubeadm init`.
+
+Next, `kubeadm` runs the following two commands to load the new configuration into the kubelet:
+
+```bash
+systemctl daemon-reload && systemctl restart kubelet
+```
+
+After the kubelet loads the new configuration, kubeadm writes the
+`/etc/kubernetes/bootstrap-kubelet.conf` KubeConfig file, which contains a CA certificate and Bootstrap
+Token. These are used by the kubelet to perform the TLS Bootstrap and obtain a unique
+credential, which is stored in `/etc/kubernetes/kubelet.conf`. When this file is written, the kubelet
+has finished performing the TLS Bootstrap.
+
+## The kubelet drop-in file for systemd
+
+The configuration file installed by the kubeadm DEB or RPM package is written to
+`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` and is used by systemd.
+
+```none
+[Service]
+Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
+Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
+# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating
+# the KUBELET_KUBEADM_ARGS variable dynamically
+EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
+# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably,
+# the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.
+# KUBELET_EXTRA_ARGS should be sourced from this file.
+EnvironmentFile=-/etc/default/kubelet
+ExecStart=
+ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
+```
+
+This file specifies the default locations for all of the files managed by kubeadm for the kubelet.
+
+- The KubeConfig file to use for the TLS Bootstrap is `/etc/kubernetes/bootstrap-kubelet.conf`,
+ but it is only used if `/etc/kubernetes/kubelet.conf` does not exist.
+- The KubeConfig file with the unique kubelet identity is `/etc/kubernetes/kubelet.conf`.
+- The file containing the kubelet's ComponentConfig is `/var/lib/kubelet/config.yaml`.
+- The dynamic environment file that contains `KUBELET_KUBEADM_ARGS` is sourced from `/var/lib/kubelet/kubeadm-flags.env`.
+- The file that can contain user-specified flag overrides with `KUBELET_EXTRA_ARGS` is sourced from
+ `/etc/default/kubelet` (for DEBs), or `/etc/sysconfig/kubelet` (for RPMs). `KUBELET_EXTRA_ARGS`
+ is last in the flag chain and has the highest priority in the event of conflicting settings.
+
+## Kubernetes binaries and package contents
+
+The DEB and RPM packages shipped with the Kubernetes releases are:
+
+| Package name | Description |
+|--------------|-------------|
+| `kubeadm` | Installs the `/usr/bin/kubeadm` CLI tool and [the kubelet drop-in file](#the-kubelet-drop-in-file-for-systemd) for the kubelet. |
+| `kubelet` | Installs the `/usr/bin/kubelet` binary. |
+| `kubectl` | Installs the `/usr/bin/kubectl` binary. |
+| `kubernetes-cni` | Installs the official CNI binaries into the `/opt/cni/bin` directory. |
+| `cri-tools` | Installs the `/usr/bin/crictl` binary from [https://github.com/kubernetes-incubator/cri-tools](https://github.com/kubernetes-incubator/cri-tools). |
+
+{{% /capture %}}
diff --git a/content/en/docs/setup/independent/setup-ha-etcd-with-kubeadm.md b/content/en/docs/setup/independent/setup-ha-etcd-with-kubeadm.md
index 65a289723..286a6f121 100644
--- a/content/en/docs/setup/independent/setup-ha-etcd-with-kubeadm.md
+++ b/content/en/docs/setup/independent/setup-ha-etcd-with-kubeadm.md
@@ -54,7 +54,7 @@ this example.
cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
- ExecStart=/usr/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
+ ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
Restart=always
EOF
@@ -86,7 +86,7 @@ this example.
apiVersion: "kubeadm.k8s.io/v1alpha2"
kind: MasterConfiguration
etcd:
- localEtcd:
+ local:
serverCertSANs:
- "${HOST}"
peerCertSANs:
diff --git a/content/en/docs/setup/independent/troubleshooting-kubeadm.md b/content/en/docs/setup/independent/troubleshooting-kubeadm.md
index 63effc467..4ba8688e6 100644
--- a/content/en/docs/setup/independent/troubleshooting-kubeadm.md
+++ b/content/en/docs/setup/independent/troubleshooting-kubeadm.md
@@ -123,7 +123,7 @@ Calico, Canal, and Flannel CNI providers are verified to support HostPort.
For more information, see the [CNI portmap documentation](https://github.com/containernetworking/plugins/blob/master/plugins/meta/portmap/README.md).
If your network provider does not support the portmap CNI plugin, you may need to use the [NodePort feature of
-services](/docs/concepts/services-networking/service/#type-nodeport) or use `HostNetwork=true`.
+services](/docs/concepts/services-networking/service/#nodeport) or use `HostNetwork=true`.
## Pods are not accessible via their Service IP
diff --git a/content/en/docs/setup/minikube.md b/content/en/docs/setup/minikube.md
index 8ab14470b..7835ed8b1 100644
--- a/content/en/docs/setup/minikube.md
+++ b/content/en/docs/setup/minikube.md
@@ -1,7 +1,7 @@
---
reviewers:
- dlorenc
-- r2d4
+- balopat
- aaron-prindle
title: Running Kubernetes Locally via Minikube
---
diff --git a/content/en/docs/setup/multiple-zones.md b/content/en/docs/setup/multiple-zones.md
index bf5bcb195..fb6c313fc 100644
--- a/content/en/docs/setup/multiple-zones.md
+++ b/content/en/docs/setup/multiple-zones.md
@@ -122,11 +122,11 @@ and `failure-domain.beta.kubernetes.io/zone` for the zone:
> kubectl get nodes --show-labels
-NAME STATUS AGE VERSION LABELS
-kubernetes-master Ready,SchedulingDisabled 6m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
-kubernetes-minion-87j9 Ready 6m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
-kubernetes-minion-9vlv Ready 6m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
-kubernetes-minion-a12q Ready 6m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
+NAME STATUS ROLES AGE VERSION LABELS
+kubernetes-master Ready,SchedulingDisabled 6m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
+kubernetes-minion-87j9 Ready 6m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
+kubernetes-minion-9vlv Ready 6m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
+kubernetes-minion-a12q Ready 6m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
```
### Add more nodes in a second zone
@@ -157,14 +157,14 @@ in us-central1-b:
```shell
> kubectl get nodes --show-labels
-NAME STATUS AGE VERSION LABELS
-kubernetes-master Ready,SchedulingDisabled 16m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
-kubernetes-minion-281d Ready 2m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
-kubernetes-minion-87j9 Ready 16m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
-kubernetes-minion-9vlv Ready 16m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
-kubernetes-minion-a12q Ready 17m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
-kubernetes-minion-pp2f Ready 2m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-pp2f
-kubernetes-minion-wf8i Ready 2m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-wf8i
+NAME STATUS ROLES AGE VERSION LABELS
+kubernetes-master Ready,SchedulingDisabled 16m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
+kubernetes-minion-281d Ready 2m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
+kubernetes-minion-87j9 Ready 16m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
+kubernetes-minion-9vlv Ready 16m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
+kubernetes-minion-a12q Ready 17m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-a12q
+kubernetes-minion-pp2f Ready 2m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-pp2f
+kubernetes-minion-wf8i Ready 2m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-wf8i
```
### Volume affinity
@@ -284,10 +284,10 @@ Node: kubernetes-minion-281d/10.240.0.8
Node: kubernetes-minion-olsh/10.240.0.11
> kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels
-NAME STATUS AGE VERSION LABELS
-kubernetes-minion-9vlv Ready 34m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
-kubernetes-minion-281d Ready 20m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
-kubernetes-minion-olsh Ready 3m v1.6.0+fff5156 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kubernetes.io/hostname=kubernetes-minion-olsh
+NAME STATUS ROLES AGE VERSION LABELS
+kubernetes-minion-9vlv Ready 34m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
+kubernetes-minion-281d Ready 20m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
+kubernetes-minion-olsh Ready 3m v1.11.1 beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-f,kubernetes.io/hostname=kubernetes-minion-olsh
```
diff --git a/content/en/docs/setup/on-premises-vm/ovirt.md b/content/en/docs/setup/on-premises-vm/ovirt.md
index a461de95e..0c030f45c 100644
--- a/content/en/docs/setup/on-premises-vm/ovirt.md
+++ b/content/en/docs/setup/on-premises-vm/ovirt.md
@@ -23,10 +23,10 @@ It is mandatory to [install the ovirt-guest-agent] in the guests for the VM ip a
Once the Kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider.
-[import]: http://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html
+[import]: https://ovedou.blogspot.it/2014/03/importing-glance-images-as-ovirt.html
[install]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#create-virtual-machines
[generate a template]: https://www.ovirt.org/documentation/quickstart/quickstart-guide/#using-templates
-[install the ovirt-guest-agent]: http://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-fedora/
+[install the ovirt-guest-agent]: https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-fedora/
## Using the oVirt Cloud Provider
diff --git a/content/en/docs/setup/pick-right-solution.md b/content/en/docs/setup/pick-right-solution.md
index 2498044a5..331202dbf 100644
--- a/content/en/docs/setup/pick-right-solution.md
+++ b/content/en/docs/setup/pick-right-solution.md
@@ -73,6 +73,8 @@ a Kubernetes cluster from scratch.
* [Kublr](https://kublr.com) offers enterprise-grade secure, scalable, highly reliable Kubernetes clusters on AWS, Azure, GCP, and on-premise. It includes out-of-the-box backup and disaster recovery, multi-cluster centralized logging and monitoring, and built-in alerting.
+* [APPUiO](https://appuio.ch) runs an OpenShift public cloud platform, supporting any Kubernetes workload. Additionally, APPUiO offers Private Managed OpenShift Clusters, running on any public or private cloud.
+
# Turnkey Cloud Solutions
These solutions allow you to create Kubernetes clusters on a range of Cloud IaaS providers with only a
@@ -93,7 +95,9 @@ few commands. These solutions are actively developed and have active community s
* [Gardener](https://gardener.cloud/)
* [Kontena Pharos](https://kontena.io/pharos/)
* [Kublr](https://kublr.com/)
+* [Agile Stacks](https://www.agilestacks.com/products/kubernetes)
* [Alibaba Cloud](/docs/setup/turnkey/alibaba-cloud/)
+* [APPUiO](https://appuio.ch)
## On-Premises turnkey cloud solutions
These solutions allow you to create Kubernetes clusters on your internal, secure, cloud network with only a
@@ -106,6 +110,8 @@ few commands.
* [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/)
* [Kontena Pharos](https://kontena.io/pharos/)
* [Kublr](https://kublr.com/)
+* [Agile Stacks](https://www.agilestacks.com/products/kubernetes)
+* [APPUiO](https://appuio.ch)
## Custom Solutions
@@ -208,6 +214,8 @@ any | any | any | any | [docs](http://docs.
any | RKE | multi-support | flannel or canal | [docs](https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/) | [Commercial](https://rancher.com/what-is-rancher/overview/) and [Community](https://github.com/rancher/rancher)
any | [Gardener Cluster-Operator](https://kubernetes.io/blog/2018/05/17/gardener/) | multi-support | multi-support | [docs](https://gardener.cloud) | [Project/Community](https://github.com/gardener) and [Commercial]( https://cloudplatform.sap.com/)
Alibaba Cloud Container Service For Kubernetes | ROS | CentOS | flannel/Terway | [docs](https://www.aliyun.com/product/containerservice) | Commercial
+Agile Stacks | Terraform | CoreOS | multi-support | [docs](https://www.agilestacks.com/products/kubernetes) | Commercial
+
{{< note >}}
**Note:** The above table is ordered by version test/used in nodes, followed by support level.
diff --git a/content/en/docs/setup/turnkey/gce.md b/content/en/docs/setup/turnkey/gce.md
index 3e51581a6..6c88cad87 100644
--- a/content/en/docs/setup/turnkey/gce.md
+++ b/content/en/docs/setup/turnkey/gce.md
@@ -50,7 +50,7 @@ wget -q -O - https://get.k8s.io | bash
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
-By default, some containers will already be running on your cluster. Containers like `fluentd` provide [logging](/docs/concepts/cluster-administration/logging/), while `heapster` provides [monitoring](http://releases.k8s.io/master/cluster/addons/cluster-monitoring/README.md) services.
+By default, some containers will already be running on your cluster. Containers like `fluentd` provide [logging](/docs/concepts/cluster-administration/logging/), while `heapster` provides [monitoring](https://releases.k8s.io/master/cluster/addons/cluster-monitoring/README.md) services.
The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.
diff --git a/content/en/docs/tasks/access-application-cluster/configure-dns-cluster.md b/content/en/docs/tasks/access-application-cluster/configure-dns-cluster.md
index cde20991c..55a8c29a2 100644
--- a/content/en/docs/tasks/access-application-cluster/configure-dns-cluster.md
+++ b/content/en/docs/tasks/access-application-cluster/configure-dns-cluster.md
@@ -8,6 +8,6 @@ content_template: templates/concept
Kubernetes offers a DNS cluster addon, which most of the supported environments enable by default.
{{% /capture %}}
{{% capture body %}}
-For more information on how to configure DNS for a Kubernetes cluster, see the [Kubernetes DNS sample plugin.](https://github.com/kubernetes/kubernetes/tree/release-1.5/examples/cluster-dns)
+For more information on how to configure DNS for a Kubernetes cluster, see the [Kubernetes DNS sample plugin.](https://github.com/kubernetes/examples/tree/master/staging/cluster-dns)
{{% /capture %}}
diff --git a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md
index 6269dd520..f91ce1b72 100644
--- a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md
+++ b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md
@@ -31,7 +31,7 @@ frontend and backend are connected using a Kubernetes Service object.
[Services with external load balancers](/docs/tasks/access-application-cluster/create-external-load-balancer/), which
require a supported environment. If your environment does not
support this, you can use a Service of type
- [NodePort](/docs/concepts/services-networking/service/#type-nodeport) instead.
+ [NodePort](/docs/concepts/services-networking/service/#nodeport) instead.
{{% /capture %}}
@@ -144,8 +144,8 @@ kubectl create -f https://k8s.io/examples/service/access/frontend.yaml
The output verifies that both resources were created:
```
-deployment "frontend" created
-service "frontend" created
+deployment.apps/frontend created
+service/frontend created
```
**Note**: The nginx configuration is baked into the
@@ -167,16 +167,16 @@ This displays the configuration for the `frontend` Service and watches for
changes. Initially, the external IP is listed as ``:
```
-NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-frontend 10.51.252.116 80/TCP 10s
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+frontend ClusterIP 10.51.252.116 80/TCP 10s
```
As soon as an external IP is provisioned, however, the configuration updates
to include the new IP under the `EXTERNAL-IP` heading:
```
-NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-frontend 10.51.252.116 XXX.XXX.XXX.XXX 80/TCP 1m
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+frontend ClusterIP 10.51.252.116 XXX.XXX.XXX.XXX 80/TCP 1m
```
That IP can now be used to interact with the `frontend` service from outside the
diff --git a/content/en/docs/tasks/access-application-cluster/create-external-load-balancer.md b/content/en/docs/tasks/access-application-cluster/create-external-load-balancer.md
index 4d631b302..ae747d071 100644
--- a/content/en/docs/tasks/access-application-cluster/create-external-load-balancer.md
+++ b/content/en/docs/tasks/access-application-cluster/create-external-load-balancer.md
@@ -33,7 +33,7 @@ documentation.
## Configuration file
To create an external load balancer, add the following line to your
-[service configuration file](/docs/concepts/services-networking/service/#type-loadbalancer):
+[service configuration file](/docs/concepts/services-networking/service/#loadbalancer):
```json
"type": "LoadBalancer"
diff --git a/content/en/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md b/content/en/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md
index 47bba6581..d45cf2493 100644
--- a/content/en/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md
+++ b/content/en/docs/tasks/access-application-cluster/load-balance-access-application-cluster.md
@@ -75,8 +75,8 @@ load-balanced access to an application running in a cluster.
external IP address remains in the pending state.
{{< /note >}}
- NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- example-service 10.0.0.160 8080/TCP 40s
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ example-service ClusterIP 10.0.0.160 <pending> 8080/TCP 40s
1. Use your Service object to access the Hello World application:
diff --git a/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md b/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
index 97f8bc793..8c3fda3ce 100644
--- a/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
+++ b/content/en/docs/tasks/access-application-cluster/port-forward-access-application-cluster.md
@@ -32,7 +32,7 @@ for database debugging.
The output of a successful command verifies that the deployment was created:
- deployment "redis-master" created
+ deployment.apps/redis-master created
View the pod status to check that it is ready:
@@ -68,7 +68,7 @@ for database debugging.
The output of a successful command verifies that the service was created:
- service "redis-master" created
+ service/redis-master created
Check the service created:
diff --git a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md
index 3292d07c0..892f9f55e 100644
--- a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md
+++ b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions.md
@@ -447,8 +447,9 @@ The column's `format` controls the style used when `kubectl` prints the value.
### Subresources
+{{< feature-state state="beta" for_k8s_version="1.11" >}}
+
Custom resources support `/status` and `/scale` subresources.
-This feature is __beta__ in v1.11 and enabled by default.
You can disable this feature using the `CustomResourceSubresources` feature gate on
the [kube-apiserver](/docs/admin/kube-apiserver):
@@ -469,7 +470,28 @@ When the status subresource is enabled, the `/status` subresource for the custom
- `PUT` requests to the `/status` subresource only validate the status stanza of the custom resource.
- `PUT`/`POST`/`PATCH` requests to the custom resource ignore changes to the status stanza.
- Any changes to the spec stanza increments the value at `.metadata.generation`.
-- `properties`, `required` and `description` are the only constructs allowed in the root of the CRD OpenAPI validation schema.
+- Only the following constructs are allowed at the root of the CRD OpenAPI validation schema:
+
+ - Description
+ - Example
+ - ExclusiveMaximum
+ - ExclusiveMinimum
+ - ExternalDocs
+ - Format
+ - Items
+ - Maximum
+ - MaxItems
+ - MaxLength
+ - Minimum
+ - MinItems
+ - MinLength
+ - MultipleOf
+ - Pattern
+ - Properties
+ - Required
+ - Title
+ - Type
+ - UniqueItems
#### Scale subresource
diff --git a/content/en/docs/tasks/administer-cluster/access-cluster-services.md b/content/en/docs/tasks/administer-cluster/access-cluster-services.md
index 542cd28d3..76fc058cd 100644
--- a/content/en/docs/tasks/administer-cluster/access-cluster-services.md
+++ b/content/en/docs/tasks/administer-cluster/access-cluster-services.md
@@ -75,7 +75,7 @@ at `https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-l
#### Manually constructing apiserver proxy URLs
As mentioned above, you use the `kubectl cluster-info` command to retrieve the service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:
-`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`service_name[:port_name]`*`/proxy`
+`http://`*`kubernetes_master_address`*`/api/v1/namespaces/`*`namespace_name`*`/services/`*`[https:]service_name[:port_name]`*`/proxy`
If you haven't specified a name for your port, you don't have to specify *port_name* in the URL
@@ -98,6 +98,7 @@ If you haven't specified a name for your port, you don't have to specify *port_n
"unassigned_shards" : 5
}
```
+ * To access the *https* Elasticsearch service health information `_cluster/health?pretty=true`, you would use: `https://104.197.5.247/api/v1/namespaces/kube-system/services/https:elasticsearch-logging/proxy/_cluster/health?pretty=true`
#### Using web browsers to access services running on the cluster
diff --git a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md
index 4eff0d084..47423857b 100644
--- a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md
+++ b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md
@@ -158,7 +158,7 @@ Backing up an etcd cluster can be accomplished in two ways: etcd built-in snapsh
### Built-in snapshot
-etcd supports built-in snapshot, so backing up an etcd cluster is easy. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd [data directory](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir) that is not currently used by an etcd process. `datadir` is located at `$DATA_DIR/member/snap/db`. Taking the snapshot will normally not affect the performance of the member.
+etcd supports built-in snapshot, so backing up an etcd cluster is easy. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd [data directory](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir) that is not currently used by an etcd process. Taking the snapshot will normally not affect the performance of the member.
Below is an example for taking a snapshot of the keyspace served by `$ENDPOINT` to the file `snapshotdb`:
@@ -189,7 +189,7 @@ A reasonable scaling is to upgrade a three-member cluster to a five-member one,
etcd supports restoring from snapshots that are taken from an etcd process of the [major.minor](http://semver.org/) version. Restoring a version from a different patch version of etcd also is supported. A restore operation is employed to recover the data of a failed cluster.
-Before starting the restore operation, a snapshot file must be present. It can either be a snapshot file from a previous backup operation, or from a remaining [data directory](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir). `datadir` is located at `$DATA_DIR/member/snap/db`. For more information and examples on restoring a cluster from a snapshot file, see [etcd disaster recovery documentation](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/recovery.md#restoring-a-cluster).
+Before starting the restore operation, a snapshot file must be present. It can either be a snapshot file from a previous backup operation, or from a remaining [data directory](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir). For more information and examples on restoring a cluster from a snapshot file, see [etcd disaster recovery documentation](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/recovery.md#restoring-a-cluster).
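+
+As a minimal sketch (the `snapshotdb` file name follows the snapshot example earlier on this page; the target data directory is an assumption and must match however you run the restored member):
+
+```shell
+# Restore the snapshot into a fresh data directory, then point the restored etcd member at it.
+ETCDCTL_API=3 etcdctl snapshot restore snapshotdb --data-dir /var/lib/etcd-from-backup
+```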
If the access URLs of the restored cluster is changed from the previous cluster, the Kubernetes API server must be reconfigured accordingly. In this case, restart Kubernetes API server with the flag `--etcd-servers=$NEW_ETCD_CLUSTER` instead of the flag `--etcd-servers=$OLD_ETCD_CLUSTER`. Replace `$NEW_ETCD_CLUSTER` and `$OLD_ETCD_CLUSTER` with the respective IP addresses. If a load balancer is used in front of an etcd cluster, you might need to update the load balancer instead.
diff --git a/content/en/docs/tasks/administer-cluster/declare-network-policy.md b/content/en/docs/tasks/administer-cluster/declare-network-policy.md
index 6c471332d..60f2a276f 100644
--- a/content/en/docs/tasks/administer-cluster/declare-network-policy.md
+++ b/content/en/docs/tasks/administer-cluster/declare-network-policy.md
@@ -35,9 +35,9 @@ To see how Kubernetes network policy works, start off by creating an `nginx` dep
```console
$ kubectl run nginx --image=nginx --replicas=2
-deployment "nginx" created
+deployment.apps/nginx created
$ kubectl expose deployment nginx --port=80
-service "nginx" exposed
+service/nginx exposed
```
This runs two `nginx` pods in the default namespace, and exposes them through a service called `nginx`.
@@ -45,12 +45,12 @@ This runs two `nginx` pods in the default namespace, and exposes them through a
```console
$ kubectl get svc,pod
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-svc/kubernetes 10.100.0.1 443/TCP 46m
-svc/nginx 10.100.0.16 80/TCP 33s
+service/kubernetes 10.100.0.1 <none> 443/TCP 46m
+service/nginx 10.100.0.16 <none> 80/TCP 33s
NAME READY STATUS RESTARTS AGE
-po/nginx-701339712-e0qfq 1/1 Running 0 35s
-po/nginx-701339712-o00ef 1/1 Running 0 35s
+pod/nginx-701339712-e0qfq 1/1 Running 0 35s
+pod/nginx-701339712-o00ef 1/1 Running 0 35s
```
## Test the service by accessing it from another pod
@@ -96,7 +96,7 @@ Use kubectl to create a NetworkPolicy from the above nginx-policy.yaml file:
```console
$ kubectl create -f nginx-policy.yaml
-networkpolicy "access-nginx" created
+networkpolicy.networking.k8s.io/access-nginx created
```
## Test access to the service when access label is not defined
diff --git a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
index 62d05c19b..23061cbd7 100644
--- a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
+++ b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
@@ -28,7 +28,7 @@ Then create a pod using this file and verify its status:
```shell
kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml
-pod "busybox" created
+pod/busybox created
kubectl get pods busybox
NAME READY STATUS RESTARTS AGE
@@ -129,9 +129,9 @@ Verify that the DNS service is up by using the `kubectl get service` command.
```shell
kubectl get svc --namespace=kube-system
-NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
...
-kube-dns 10.0.0.10 53/UDP,53/TCP 1h
+kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 1h
...
```
diff --git a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
index 1eddf4edd..afdb82945 100644
--- a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
+++ b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
@@ -105,7 +105,7 @@ command to create the Deployment:
The output of a successful command is:
- deployment "kube-dns-autoscaler" created
+ deployment.apps/kube-dns-autoscaler created
DNS horizontal autoscaling is now enabled.
@@ -159,7 +159,7 @@ This option works for all situations. Enter this command:
The output is:
- deployment "kube-dns-autoscaler" scaled
+ deployment.extensions/kube-dns-autoscaler scaled
Verify that the replica count is zero:
@@ -181,7 +181,7 @@ no one will re-create it:
The output is:
- deployment "kube-dns-autoscaler" deleted
+ deployment.extensions "kube-dns-autoscaler" deleted
### Option 3: Delete the kube-dns-autoscaler manifest file from the master node
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md
index 8b4a2ec13..6589c9cbf 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11.md
@@ -20,7 +20,7 @@ This page explains how to upgrade a Kubernetes cluster created with `kubeadm` fr
### Additional information
- All containers are restarted after upgrade, because the container spec hash value is changed.
-- You can upgrade only froom one minor version to the next minor version. That is, you cannot skip versions when you upgrade. For example, you can upgrade only from 1.10 to 1.11, not from 1.9 to 1.11.
+- You can upgrade only from one minor version to the next minor version. That is, you cannot skip versions when you upgrade. For example, you can upgrade only from 1.10 to 1.11, not from 1.9 to 1.11.
- The default DNS provider in version 1.11 is [CoreDNS](https://coredns.io/) rather than [kube-dns](https://github.com/kubernetes/dns).
To keep `kube-dns`, pass `--feature-gates=CoreDNS=false` to `kubeadm upgrade apply`.
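+
+As a minimal sketch of that flag in use (the exact patch release is an assumption; substitute the v1.11.x version you are upgrading to):
+
+```shell
+# Keep kube-dns instead of migrating to CoreDNS during the upgrade.
+kubeadm upgrade apply v1.11.1 --feature-gates=CoreDNS=false
+```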
@@ -30,7 +30,7 @@ To keep `kube-dns`, pass `--feature-gates=CoreDNS=false` to `kubeadm upgrade app
## Upgrade the control plane
-1. On your master node, run the following (as root:
+1. On your master node, run the following (as root):
export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
@@ -180,7 +180,7 @@ To keep `kube-dns`, pass `--feature-gates=CoreDNS=false` to `kubeadm upgrade app
Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow.
Check the [addons](/docs/concepts/cluster-administration/addons/) page to
- find your CNI provider and see whther additional upgrade steps are required.
+ find your CNI provider and see whether additional upgrade steps are required.
## Upgrade master and node packages
diff --git a/content/en/docs/tasks/administer-cluster/namespaces.md b/content/en/docs/tasks/administer-cluster/namespaces.md
index 77fbc6342..0e24a21dc 100644
--- a/content/en/docs/tasks/administer-cluster/namespaces.md
+++ b/content/en/docs/tasks/administer-cluster/namespaces.md
@@ -26,12 +26,14 @@ $ kubectl get namespaces
NAME STATUS AGE
default Active 11d
kube-system Active 11d
+kube-public Active 11d
```
-Kubernetes starts with two initial namespaces:
+Kubernetes starts with three initial namespaces:
* `default` The default namespace for objects with no other namespace
* `kube-system` The namespace for objects created by the Kubernetes system
+ * `kube-public` This namespace is created automatically and is readable by all users (including those not authenticated). It is mostly reserved for cluster usage, in case some resources need to be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.
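+
+As a small sketch of what that means in practice, you can list what a cluster publishes there; the `cluster-info` ConfigMap that kubeadm-based clusters place in `kube-public` is one example of such a resource, not something every cluster provides:
+
+```shell
+# List resources published for cluster-wide consumption in the kube-public namespace.
+kubectl get configmaps --namespace=kube-public
+```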
You can also get the summary of a specific namespace using:
diff --git a/content/en/docs/tasks/administer-cluster/storage-object-in-use-protection.md b/content/en/docs/tasks/administer-cluster/storage-object-in-use-protection.md
index d5e90dd34..3065c42cf 100644
--- a/content/en/docs/tasks/administer-cluster/storage-object-in-use-protection.md
+++ b/content/en/docs/tasks/administer-cluster/storage-object-in-use-protection.md
@@ -15,8 +15,11 @@ Persistent volume claims (PVCs) that are in active use by a pod and persistent v
{{% capture prerequisites %}}
The Storage Object in Use Protection feature is enabled in one of the below Kubernetes versions:
-- {% assign for_k8s_version = "1.10" %} {% include feature-state-beta.md %}
-- {% assign for_k8s_version = "1.11" %} {% include feature-state-stable.md %}
+
+{{< feature-state for_k8s_version="v1.10" state="beta" >}}
+
+
+{{< feature-state for_k8s_version="v1.11" state="stable" >}}
{{% /capture %}}
@@ -186,7 +189,7 @@ Events:
- Create a second pod that uses the same PVC:
-```
+```yaml
kind: Pod
apiVersion: v1
metadata:
@@ -212,11 +215,11 @@ spec:
- Verify that the scheduling of the second pod fails with the below warning:
-```
+```shell
Warning FailedScheduling 18s (x4 over 21s) default-scheduler persistentvolumeclaim "slzc" is being deleted
```
-- Wait until the pod status of both pods is `Terminated` or `Completed` (either delete the pods or wait until they finish). Afterwards, check that the PVC is removed.
+- Wait until the pod status of both pods is `Terminated` or `Completed` (either delete the pods or wait until they finish). Afterwards, check that the PVC is removed.
## Storage Object in Use Protection feature used for PV Protection
diff --git a/content/en/docs/tasks/administer-federation/ingress.md b/content/en/docs/tasks/administer-federation/ingress.md
index a023a45c8..e89ed92dc 100644
--- a/content/en/docs/tasks/administer-federation/ingress.md
+++ b/content/en/docs/tasks/administer-federation/ingress.md
@@ -195,8 +195,8 @@ You can verify this by checking in each of the underlying clusters. For example:
``` shell
kubectl --context=gce-asia-east1a get services nginx
-NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-nginx 10.63.250.98 104.199.136.89 80/TCP 9m
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+nginx ClusterIP 10.63.250.98 104.199.136.89 80/TCP 9m
```
## Hybrid cloud capabilities
diff --git a/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md b/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md
index 8d03fc86b..cf14d0391 100644
--- a/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md
+++ b/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md
@@ -111,7 +111,7 @@ resources:
Use `kubectl top` to fetch the metrics for the pod:
```shell
-kubectl top pod memory-demo
+kubectl top pod memory-demo --namespace=mem-example
```
The output shows that the Pod is using about 162,900,000 bytes of memory, which
diff --git a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md
index 7d3b0cbf0..b55f0b7eb 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md
@@ -115,6 +115,17 @@ of `Always`.
```
1. In your shell, goto `/data/redis`, and verify that `test-file` is still there.
+ ```shell
+ root@redis:/data/redis# cd /data/redis/
+ root@redis:/data/redis# ls
+ test-file
+ ```
+
+1. Delete the Pod that you created for this exercise:
+
+ ```shell
+ kubectl delete pod redis
+ ```
{{% /capture %}}
diff --git a/content/en/docs/tasks/configure-pod-container/extended-resource.md b/content/en/docs/tasks/configure-pod-container/extended-resource.md
index 53484c9f4..48fdcf112 100644
--- a/content/en/docs/tasks/configure-pod-container/extended-resource.md
+++ b/content/en/docs/tasks/configure-pod-container/extended-resource.md
@@ -120,9 +120,10 @@ extended-resource-demo-2 0/1 Pending 0 6m
## Clean up
-Delete the Pod that you created for this exercise:
+Delete the Pods that you created for this exercise:
```shell
+kubectl delete pod extended-resource-demo
kubectl delete pod extended-resource-demo-2
```
diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
index fc57fa810..e998baa6a 100644
--- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
+++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
@@ -26,7 +26,9 @@ private Docker registry or repository.
On your laptop, you must authenticate with a registry in order to pull a private image:
- docker login
+```shell
+docker login
+```
When prompted, enter your Docker username and password.
@@ -34,17 +36,21 @@ The login process creates or updates a `config.json` file that holds an authoriz
View the `config.json` file:
- cat ~/.docker/config.json
+```shell
+cat ~/.docker/config.json
+```
The output contains a section similar to this:
- {
- "auths": {
- "https://index.docker.io/v1/": {
- "auth": "c3R...zE2"
- }
+```json
+{
+ "auths": {
+ "https://index.docker.io/v1/": {
+ "auth": "c3R...zE2"
}
}
+}
+```
{{< note >}}
**Note:** If you use a Docker credentials store, you won't see that `auth` entry but a `credsStore` entry with the name of the store as value.
@@ -56,7 +62,9 @@ A Kubernetes cluster uses the Secret of `docker-registry` type to authenticate w
Create this Secret, naming it `regcred`:
- kubectl create secret docker-registry regcred --docker-server= --docker-username= --docker-password= --docker-email=
+```shell
+kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
+```
where:
@@ -71,38 +79,50 @@ You have successfully set your Docker credentials in the cluster as a Secret cal
To understand the contents of the `regcred` Secret you just created, start by viewing the Secret in YAML format:
- kubectl get secret regcred --output=yaml
+```shell
+kubectl get secret regcred --output=yaml
+```
The output is similar to this:
- apiVersion: v1
- data:
- .dockerconfigjson: eyJodHRwczovL2luZGV4L ... J0QUl6RTIifX0=
- kind: Secret
- metadata:
- ...
- name: regcred
- ...
- type: kubernetes.io/dockerconfigjson
+```yaml
+apiVersion: v1
+data:
+ .dockerconfigjson: eyJodHRwczovL2luZGV4L ... J0QUl6RTIifX0=
+kind: Secret
+metadata:
+ ...
+ name: regcred
+ ...
+type: kubernetes.io/dockerconfigjson
+```
The value of the `.dockerconfigjson` field is a base64 representation of your Docker credentials.
To understand what is in the `.dockerconfigjson` field, convert the secret data to a
readable format:
- kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
+```shell
+kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
+```
The output is similar to this:
- {"auths":{"yourprivateregistry.com":{"username":"janedoe","password":"xxxxxxxxxxx","email":"jdoe@example.com","auth":"c3R...zE2"}}}
+```json
+{"auths":{"yourprivateregistry.com":{"username":"janedoe","password":"xxxxxxxxxxx","email":"jdoe@example.com","auth":"c3R...zE2"}}}
+```
To understand what is in the `auth` field, convert the base64-encoded data to a readable format:
- echo "c3R...zE2" | base64 --decode
+```shell
+echo "c3R...zE2" | base64 --decode
+```
The output, username and password concatenated with a `:`, is similar to this:
- janedoe:xxxxxxxxxxx
+```none
+janedoe:xxxxxxxxxxx
+```
Notice that the Secret data contains the authorization token similar to your local `~/.docker/config.json` file.
@@ -116,19 +136,25 @@ Here is a configuration file for a Pod that needs access to your Docker credenti
Download the above file:
- wget -O my-private-reg-pod.yaml https://k8s.io/examples/pods/private-reg-pod.yaml
+```shell
+wget -O my-private-reg-pod.yaml https://k8s.io/examples/pods/private-reg-pod.yaml
+```
+In file `my-private-reg-pod.yaml`, replace `<your-private-image>` with the path to an image in a private registry such as:
- janedoe/jdoe-private:v1
+```none
+janedoe/jdoe-private:v1
+```
To pull the image from the private registry, Kubernetes needs credentials.
The `imagePullSecrets` field in the configuration file specifies that Kubernetes should get the credentials from a Secret named `regcred`.
Create a Pod that uses your Secret, and verify that the Pod is running:
- kubectl create -f my-private-reg-pod.yaml
- kubectl get pod private-reg
+```shell
+kubectl create -f my-private-reg-pod.yaml
+kubectl get pod private-reg
+```
{{% /capture %}}
diff --git a/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md b/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md
index fbb7f95d1..ca5b4c8e3 100644
--- a/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md
+++ b/content/en/docs/tasks/debug-application-cluster/debug-application-introspection.md
@@ -28,7 +28,7 @@ Create deployment by running following command:
```shell
$ kubectl create -f https://k8s.io/examples/application/nginx-with-request.yaml
-deployment "nginx-deployment" created
+deployment.apps/nginx-deployment created
```
```shell
@@ -257,11 +257,11 @@ Sometimes when debugging it can be useful to look at the status of a node -- for
```shell
$ kubectl get nodes
-NAME STATUS AGE VERSION
-kubernetes-node-861h NotReady 1h v1.6.0+fff5156
-kubernetes-node-bols Ready 1h v1.6.0+fff5156
-kubernetes-node-st6x Ready 1h v1.6.0+fff5156
-kubernetes-node-unaj Ready 1h v1.6.0+fff5156
+NAME STATUS ROLES AGE VERSION
+kubernetes-node-861h NotReady <none> 1h v1.11.1
+kubernetes-node-bols Ready <none> 1h v1.11.1
+kubernetes-node-st6x Ready <none> 1h v1.11.1
+kubernetes-node-unaj Ready <none> 1h v1.11.1
```
```shell
diff --git a/content/en/docs/tasks/debug-application-cluster/debug-service.md b/content/en/docs/tasks/debug-application-cluster/debug-service.md
index 9b9c4ab8f..42f52513c 100644
--- a/content/en/docs/tasks/debug-application-cluster/debug-service.md
+++ b/content/en/docs/tasks/debug-application-cluster/debug-service.md
@@ -75,7 +75,7 @@ $ kubectl run hostnames --image=k8s.gcr.io/serve_hostname \
--labels=app=hostnames \
--port=9376 \
--replicas=3
-deployment "hostnames" created
+deployment.apps/hostnames created
```
`kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands.
@@ -134,6 +134,7 @@ So the first thing to check is whether that `Service` actually exists:
```shell
$ kubectl get svc hostnames
+No resources found.
Error from server (NotFound): services "hostnames" not found
```
@@ -142,15 +143,15 @@ walk-through - you can use your own `Service`'s details here.
```shell
$ kubectl expose deployment hostnames --port=80 --target-port=9376
-service "hostnames" exposed
+service/hostnames exposed
```
And read it back, just to be sure:
```shell
$ kubectl get svc hostnames
-NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-hostnames 10.0.1.175 80/TCP 5s
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+hostnames ClusterIP 10.0.1.175 <none> 80/TCP 5s
```
As before, this is the same as if you had started the `Service` with YAML:
diff --git a/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md b/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md
index 7a0516ba2..fd6b0bff7 100644
--- a/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md
+++ b/content/en/docs/tasks/debug-application-cluster/logging-stackdriver.md
@@ -176,7 +176,7 @@ and then recreating it:
```shell
$ kubectl create -f https://k8s.io/examples/debug/counter-pod.yaml
-pod "counter" created
+pod/counter created
```
After some time, you can access logs from the counter pod again:
diff --git a/content/en/docs/tasks/debug-application-cluster/resource-usage-monitoring.md b/content/en/docs/tasks/debug-application-cluster/resource-usage-monitoring.md
index 115dc3ecf..a4855d8dd 100644
--- a/content/en/docs/tasks/debug-application-cluster/resource-usage-monitoring.md
+++ b/content/en/docs/tasks/debug-application-cluster/resource-usage-monitoring.md
@@ -7,8 +7,8 @@ title: Tools for Monitoring Compute, Storage, and Network Resources
{{% capture overview %}}
-To scale and application and provide a reliable service, you need to
-understand how an application behaves when it is deployed. You can examine
+To scale an application and provide a reliable service, you need to
+understand how the application behaves when it is deployed. You can examine
application performance in a Kubernetes cluster by examining the containers,
[pods](/docs/user-guide/pods), [services](/docs/user-guide/services), and
the characteristics of the overall cluster. Kubernetes provides detailed
diff --git a/content/en/docs/tasks/federation/federation-service-discovery.md b/content/en/docs/tasks/federation/federation-service-discovery.md
index fc905d76a..6992a54bd 100644
--- a/content/en/docs/tasks/federation/federation-service-discovery.md
+++ b/content/en/docs/tasks/federation/federation-service-discovery.md
@@ -99,8 +99,8 @@ You can verify this by checking in each of the underlying clusters, for example:
``` shell
kubectl --context=gce-asia-east1a get services nginx
-NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-nginx 10.63.250.98 104.199.136.89 80/TCP 9m
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+nginx ClusterIP 10.63.250.98 104.199.136.89 80/TCP 9m
```
The above assumes that you have a context named 'gce-asia-east1a'
@@ -138,7 +138,7 @@ underlying Kubernetes services (once these have been allocated - this
may take a few seconds). For inter-cluster and inter-cloud-provider
networking between service shards to work correctly, your services
need to have an externally visible IP address. [Service Type:
-Loadbalancer](/docs/concepts/services-networking/service/#type-loadbalancer)
+Loadbalancer](/docs/concepts/services-networking/service/#loadbalancer)
is typically used for this, although other options
(e.g. [External IP's](/docs/concepts/services-networking/service/#external-ips)) exist.
diff --git a/content/en/docs/tasks/federation/set-up-cluster-federation-kubefed.md b/content/en/docs/tasks/federation/set-up-cluster-federation-kubefed.md
index 3b9be17f3..abf9388ae 100644
--- a/content/en/docs/tasks/federation/set-up-cluster-federation-kubefed.md
+++ b/content/en/docs/tasks/federation/set-up-cluster-federation-kubefed.md
@@ -279,11 +279,11 @@ kubefed init fellowship \
`kubefed init` exposes the federation API server as a Kubernetes
[service](/docs/concepts/services-networking/service/) on the host cluster. By default,
this service is exposed as a
-[load balanced service](/docs/concepts/services-networking/service/#type-loadbalancer).
+[load balanced service](/docs/concepts/services-networking/service/#loadbalancer).
Most on-premises and bare-metal environments, and some cloud
environments lack support for load balanced services. `kubefed init`
allows exposing the federation API server as a
-[`NodePort` service](/docs/concepts/services-networking/service/#type-nodeport) on
+[`NodePort` service](/docs/concepts/services-networking/service/#nodeport) on
such environments. This can be accomplished by passing
the `--api-server-service-type=NodePort` flag. You can also specify
the preferred address to advertise the federation API server by
diff --git a/content/en/docs/tasks/inject-data-application/define-command-argument-container.md b/content/en/docs/tasks/inject-data-application/define-command-argument-container.md
index a1e0a0bae..d920dacdf 100644
--- a/content/en/docs/tasks/inject-data-application/define-command-argument-container.md
+++ b/content/en/docs/tasks/inject-data-application/define-command-argument-container.md
@@ -34,6 +34,11 @@ override the default command and arguments provided by the container image.
If you define args, but do not define a command, the default command is used
with your new arguments.
+{{< note >}}
+**Note:** The `command` field corresponds to `entrypoint` in some container
+runtimes. Refer to the [Notes](#notes) below.
+{{< /note >}}
+
In this exercise, you create a Pod that runs one container. The configuration
file for the Pod defines a command and two arguments:
@@ -132,7 +137,6 @@ Here are some examples:
{{% capture whatsnext %}}
-* Learn more about [containers and commands](/docs/user-guide/containers/).
* Learn more about [configuring pods and containers](/docs/tasks/).
* Learn more about [running commands in a container](/docs/tasks/debug-application-cluster/get-shell-running-container/).
* See [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core).
diff --git a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md
index 8cf0d527c..bf65daa13 100644
--- a/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md
+++ b/content/en/docs/tasks/job/fine-parallel-processing-work-queue.md
@@ -55,9 +55,9 @@ directory and start a temporary Pod running Redis and a service so we can find i
```shell
$ cd content/en/examples/application/job/redis
$ kubectl create -f ./redis-pod.yaml
-pod "redis-master" created
+pod/redis-master created
$ kubectl create -f ./redis-service.yaml
-service "redis" created
+service/redis created
```
If you're not working from the source tree, you could also download the following
diff --git a/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md b/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md
index 323cdd57e..76fc714da 100644
--- a/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md
+++ b/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md
@@ -67,6 +67,12 @@ If you're using any version of kubectl <= 1.4, you should omit the `--force` opt
kubectl delete pods --grace-period=0
```
+If the pod is stuck in the `Unknown` state even after these commands, use the following command to remove the pod from the cluster:
+
+```shell
+kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'
+```
+
Always perform force deletion of StatefulSet Pods carefully and with complete knowledge of the risks involved.
{{% /capture %}}
diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
index d6dc3b0a0..343582acb 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -68,8 +68,8 @@ First, we will start a deployment running the image and expose it as a service:
```shell
$ kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80
-service "php-apache" created
-deployment "php-apache" created
+service/php-apache created
+deployment.apps/php-apache created
```
## Create Horizontal Pod Autoscaler
@@ -85,7 +85,7 @@ See [here](https://git.k8s.io/community/contributors/design-proposals/autoscalin
```shell
$ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
-deployment "php-apache" autoscaled
+horizontalpodautoscaler.autoscaling/php-apache autoscaled
```
We may check the current status of autoscaler by running:
@@ -391,7 +391,7 @@ We will create the autoscaler by executing the following command:
```shell
$ kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml
-horizontalpodautoscaler "php-apache" created
+horizontalpodautoscaler.autoscaling/php-apache created
```
{{% /capture %}}
diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
index a2ae359d1..8ff9bf253 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -60,9 +60,16 @@ or the custom metrics API (for all other metrics).
* For object metrics, a single metric is fetched (which describes the object
in question), and compared to the target value, to produce a ratio as above.
-The HorizontalPodAutoscaler normally fetches metrics from a series of aggregated APIs (`metrics.k8s.io`,\
-`custom.metrics.k8s.io`, and `external.metrics.k8s.io`). It can also fetch metrics directly
-from Heapster. Fetching metrics from Heapster is deprecated as of Kubernetes 1.11.
+The HorizontalPodAutoscaler normally fetches metrics from a series of aggregated APIs (`metrics.k8s.io`,
+`custom.metrics.k8s.io`, and `external.metrics.k8s.io`). The `metrics.k8s.io` API is usually provided by
+metrics-server, which needs to be launched separately. See
+[metrics-server](https://kubernetes.io/docs/tasks/debug-application-cluster/core-metrics-pipeline/#metrics-server)
+for instructions. The HorizontalPodAutoscaler can also fetch metrics directly from Heapster.
+
+{{< note >}}
+{{< feature-state state="deprecated" for_k8s_version="1.11" >}}
+Fetching metrics from Heapster is deprecated as of Kubernetes 1.11.
+{{< /note >}}
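+
+A quick way to confirm that the resource metrics API is actually being served is to query it directly; this sketch assumes metrics-server (or another `metrics.k8s.io` provider) is already deployed:
+
+```shell
+# Returns the APIResourceList for the resource metrics API when a provider is registered.
+kubectl get --raw "/apis/metrics.k8s.io/v1beta1"
+```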
See [Support for metrics APIs](#support-for-metrics-apis) for more details.
@@ -92,8 +99,8 @@ We can list autoscalers by `kubectl get hpa` and get detailed description by `ku
Finally, we can delete an autoscaler using `kubectl delete hpa`.
In addition, there is a special `kubectl autoscale` command for easy creation of a Horizontal Pod Autoscaler.
-For instance, executing `kubectl autoscale rc foo --min=2 --max=5 --cpu-percent=80`
-will create an autoscaler for replication controller *foo*, with target CPU utilization set to `80%`
+For instance, executing `kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80`
+will create an autoscaler for ReplicaSet *foo*, with target CPU utilization set to `80%`
and the number of replicas between 2 and 5.
The detailed documentation of `kubectl autoscale` can be found [here](/docs/reference/generated/kubectl/kubectl-commands/#autoscale).
diff --git a/content/en/docs/tasks/tools/install-kubectl.md b/content/en/docs/tasks/tools/install-kubectl.md
index d7d2ec444..309e91c98 100644
--- a/content/en/docs/tasks/tools/install-kubectl.md
+++ b/content/en/docs/tasks/tools/install-kubectl.md
@@ -65,6 +65,14 @@ kubectl is available as a [snap](https://snapcraft.io/) application.
2. Run `kubectl version` to verify that the version you've installed is sufficiently up-to-date.
+## Install with Macports on macOS
+
+1. If you are on macOS and using [Macports](https://macports.org/) package manager, you can install with:
+
+ port install kubectl
+
+2. Run `kubectl version` to verify that the version you've installed is sufficiently up-to-date.
+
## Install with Powershell from PSGallery
1. If you are on Windows and using [Powershell Gallery](https://www.powershellgallery.com/) package manager, you can install and update with:
@@ -264,7 +272,7 @@ fi
Or when using [Oh-My-Zsh](http://ohmyz.sh/), edit the ~/.zshrc file and update the `plugins=` line to include the kubectl plugin.
```shell
-source <(kubectl completion zsh)
+plugins=(kubectl)
```
{{% /capture %}}
diff --git a/content/en/docs/test.md b/content/en/docs/test.md
index e0c916e3c..ec00b0fef 100644
--- a/content/en/docs/test.md
+++ b/content/en/docs/test.md
@@ -214,6 +214,16 @@ Common languages used in Kubernetes documentation code blocks include:
- `xml`
- `none` (disables syntax highlighting for the block)
+### Code blocks containing Hugo shortcodes
+
+To show raw Hugo shortcodes as in the above example and prevent Hugo
+from interpreting them, use C-style comments directly after the `<` and before
+the `>` characters. The following example illustrates this (view the Markdown
+source for this page).
+
+```none
+{{</* codenew file="pods/storage/gce-volume.yaml" */>}}
+```
## Links
diff --git a/content/en/docs/tutorials/hello-minikube.md b/content/en/docs/tutorials/hello-minikube.md
index 7f02a333a..745df7e09 100644
--- a/content/en/docs/tutorials/hello-minikube.md
+++ b/content/en/docs/tutorials/hello-minikube.md
@@ -262,9 +262,9 @@ kubectl get services
Output:
```shell
-NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-hello-node 10.0.0.71 8080/TCP 6m
-kubernetes 10.0.0.1 443/TCP 14d
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+hello-node ClusterIP 10.0.0.71 <none> 8080/TCP 6m
+kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 14d
```
The `--type=LoadBalancer` flag indicates that you want to expose your Service
@@ -362,13 +362,13 @@ Output:
```shell
NAME READY STATUS RESTARTS AGE
-po/heapster-zbwzv 1/1 Running 0 2m
-po/influxdb-grafana-gtht9 2/2 Running 0 2m
+pod/heapster-zbwzv 1/1 Running 0 2m
+pod/influxdb-grafana-gtht9 2/2 Running 0 2m
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-svc/heapster NodePort 10.0.0.52 80:31655/TCP 2m
-svc/monitoring-grafana NodePort 10.0.0.33 80:30002/TCP 2m
-svc/monitoring-influxdb ClusterIP 10.0.0.43 8083/TCP,8086/TCP 2m
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+service/heapster NodePort 10.0.0.52 <none> 80:31655/TCP 2m
+service/monitoring-grafana NodePort 10.0.0.33 <none> 80:30002/TCP 2m
+service/monitoring-influxdb ClusterIP 10.0.0.43 <none> 8083/TCP,8086/TCP 2m
```
Open the endpoint to interacting with heapster in a browser:
diff --git a/content/en/docs/tutorials/k8s101.md b/content/en/docs/tutorials/k8s101.md
index ca7547311..0dedd9f3d 100644
--- a/content/en/docs/tutorials/k8s101.md
+++ b/content/en/docs/tutorials/k8s101.md
@@ -3,17 +3,32 @@ reviewers:
- eparis
- mikedanese
title: Kubernetes 101
+content_template: templates/tutorial
---
-{{< toc >}}
-
-## Kubectl CLI and Pods
+{{% capture overview %}}
For Kubernetes 101, we will cover kubectl, Pods, Volumes, and multiple containers.
-{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+{{% /capture %}}
+
+{{% capture objectives %}}
+
+* Learn what `kubectl` is.
+* Manage a Pod.
+* Create and mount a volume.
+* Create multiple containers in a Pod.
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
-In order for the kubectl usage examples to work, make sure you have an example directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or the latest `.yaml` files located [here](https://github.com/kubernetes/website/tree/master/content/en/docs/tutorials).
+* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+* In order for the kubectl usage examples to work, make sure you have an example directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or the latest `.yaml` files located [here](https://github.com/kubernetes/website/tree/master/content/en/docs/tutorials).
+
+{{% /capture %}}
+
+{{% capture lessoncontent %}}
## Kubectl CLI
@@ -46,13 +61,13 @@ For more information, see [Kubernetes Design Documents and Proposals](https://gi
Create a Pod containing an nginx server ([simple-pod.yaml](/examples/pods/simple-pod.yaml)):
```shell
-$ kubectl create -f https://k8s.io/examples/pods/simple-pod.yaml
+kubectl create -f https://k8s.io/examples/pods/simple-pod.yaml
```
List all Pods:
```shell
-$ kubectl get pods
+kubectl get pods
```
On most providers, the Pod IPs are not externally accessible. The easiest way to test that the pod is working is to create a busybox Pod and exec commands on it remotely. For more information, see [Get a Shell to a Running Container](/docs/tasks/debug-application-cluster/get-shell-running-container/).
@@ -60,17 +75,19 @@ On most providers, the Pod IPs are not externally accessible. The easiest way to
If the IP of the Pod is accessible, you can access its http endpoint with wget on port 80:
```shell
-$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1 --env "POD_IP=$(kubectl get pod nginx -o go-template='{{.status.podIP}}')"
+kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1 --env "POD_IP=$(kubectl get pod nginx -o go-template='{{.status.podIP}}')"
u@busybox$ wget -qO- http://$POD_IP # Run in the busybox container
u@busybox$ exit # Exit the busybox container
-$ kubectl delete pod busybox # Clean up the pod we created with "kubectl run"
+```
+```shell
+kubectl delete pod busybox # Clean up the pod we created with "kubectl run"
```
To delete a Pod named nginx:
```shell
-$ kubectl delete pod nginx
+kubectl delete pod nginx
```
@@ -119,8 +136,9 @@ For more information, see [Volumes](/docs/concepts/storage/volumes/).
#### Multiple Containers
-_Note:
-The examples below are syntactically correct, but some of the images (e.g. kubernetes/git-monitor) don't exist yet. We're working on turning these into working examples._
+{{< note >}}
+**Note:** The examples below are syntactically correct, but some of the images (e.g. kubernetes/git-monitor) don't exist yet. We're working on turning these into working examples.
+{{< /note >}}
However, often you want to have two different containers that work together. An example of this would be a web server, and a helper job that polls a git repository for new updates:
@@ -155,8 +173,11 @@ Note that we have also added a Volume here. In this case, the Volume is mounted
Finally, we have also introduced an environment variable to the `git-monitor` container, which allows us to parameterize that container with the particular git repository that we want to track.
+{{% /capture %}}
-## What's Next?
+{{% capture whatsnext %}}
Continue on to [Kubernetes 201](/docs/tutorials/k8s201/) or
for a complete application see the [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)
+
+{{% /capture %}}
diff --git a/content/en/docs/tutorials/k8s201.md b/content/en/docs/tutorials/k8s201.md
index fe62d1a04..7c41d8864 100644
--- a/content/en/docs/tutorials/k8s201.md
+++ b/content/en/docs/tutorials/k8s201.md
@@ -1,239 +1,266 @@
----
-reviewers:
-- janetkuo
-- mikedanese
-title: Kubernetes 201
----
-
-{{< toc >}}
-
-## Labels, Deployments, Services and Health Checking
-
-If you went through [Kubernetes 101](/docs/tutorials/k8s101/), you learned about kubectl, Pods, Volumes, and multiple containers.
-For Kubernetes 201, we will pick up where 101 left off and cover some slightly more advanced topics in Kubernetes, related to application productionization, Deployment and
-scaling.
-
-{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
-
-In order for the kubectl usage examples to work, make sure you have an examples directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or [the source](https://github.com/kubernetes/kubernetes).
-
-
-## Labels
-
-Having already learned about Pods and how to create them, you may be struck by an urge to create many, many Pods. Please do! But eventually you will need a system to organize these Pods into groups. The system for achieving this in Kubernetes is Labels. Labels are key-value pairs that are attached to each object in Kubernetes. Label selectors can be passed along with a RESTful `list` request to the apiserver to retrieve a list of objects which match that label selector.
-
-To add a label, add a labels section under metadata in the Pod definition:
-
-```yaml
- labels:
- env: test
-```
-
-For example, here is the nginx Pod definition with labels ([pod-nginx.yaml](/examples/pods/pod-nginx.yaml)):
-
-{{< codenew file="pods/pod-nginx.yaml" >}}
-
-Create the labeled Pod:
-
-```shell
-kubectl create -f https://k8s.io/examples/pods/pod-nginx.yaml
-```
-
-List all Pods with the label `env=test`:
-
-```shell
-kubectl get pods -l env=test
-```
-
-Delete the Pod by label:
-
-```shell
-kubectl delete pod -l env=test
-```
-
-For more information, see [Labels](/docs/concepts/overview/working-with-objects/labels/).
-They are a core concept used by two additional Kubernetes building blocks: Deployments and Services.
-
-
-## Deployments
-
-Now that you know how to make awesome, multi-container, labeled Pods and you want to use them to build an application, you might be tempted to just start building a whole bunch of individual Pods, but if you do that, a whole host of operational concerns pop up. For example: how will you scale the number of Pods up or down? How will you roll out a new release?
-
-The answer to those questions and more is to use a [Deployment](/docs/concepts/workloads/controllers/deployment/) to manage maintaining and updating your running _Pods_.
-
-A Deployment object defines a Pod creation template (a "cookie-cutter" if you will) and desired replica count. The Deployment uses a label selector to identify the Pods it manages, and will create or delete Pods as needed to meet the replica count. Deployments are also used to manage safely rolling out changes to your running Pods.
-
-Here is a Deployment that instantiates two nginx Pods:
-
-{{< codenew file="application/deployment.yaml" >}}
-
-
-### Deployment Management
-
-Create an nginx Deployment:
-
-```shell
-kubectl create -f https://k8s.io/examples/application/deployment.yaml
-```
-
-List all Deployments:
-
-```shell
-kubectl get deployment
-```
-
-List the Pods created by the Deployment:
-
-```shell
-kubectl get pods -l app=nginx
-```
-
-Upgrade the nginx container from 1.7.9 to 1.8 by changing the Deployment and calling `apply`. The following config
-contains the desired changes:
-
-{{< codenew file="application/deployment-update.yaml" >}}
-
-```shell
-kubectl apply -f https://k8s.io/examples/application/deployment-update.yaml
-```
-
-Watch the Deployment create Pods with new names and delete the old Pods:
-
-```shell
-kubectl get pods -l app=nginx
-```
-
-Delete the Deployment by name:
-
-```shell
-kubectl delete deployment nginx-deployment
-```
-
-For more information, such as how to rollback Deployment changes to a previous version, see [_Deployments_](/docs/concepts/workloads/controllers/deployment/).
-
-
-## Services
-
-Once you have a replicated set of Pods, you need an abstraction that enables connectivity between the layers of your application. For example, if you have a Deployment managing your backend jobs, you don't want to have to reconfigure your front-ends whenever you re-scale your backends. Likewise, if the Pods in your backends are scheduled (or rescheduled) onto different machines, you can't be required to re-configure your front-ends. In Kubernetes, the service abstraction achieves these goals. A service provides a way to refer to a set of Pods (selected by labels) with a single static IP address. It may also provide load balancing, if supported by the provider.
-
-For example, here is a service that balances across the Pods created in the previous nginx Deployment example ([service.yaml](/examples/service/nginx-service.yaml)):
-
-{{< codenew file="service/nginx-service.yaml" >}}
-
-
-### Service Management
-
-Create an nginx Service:
-
-```shell
-kubectl create -f https://k8s.io/examples/service/nginx-service.yaml
-```
-
-List all services:
-
-```shell
-kubectl get services
-```
-
-On most providers, the service IPs are not externally accessible. The easiest way to test that the service is working is to create a busybox Pod and exec commands on it remotely. See the [command execution documentation](/docs/user-guide/kubectl-overview/) for details.
-
-Provided the service IP is accessible, you should be able to access its http endpoint with wget on the exposed port:
-
-```shell
-
-$ export SERVICE_IP=$(kubectl get service nginx-service -o go-template='{{.spec.clusterIP}}')
-$ export SERVICE_PORT=$(kubectl get service nginx-service -o go-template='{{(index .spec.ports 0).port}}')
-$ echo "$SERVICE_IP:$SERVICE_PORT"
-$ kubectl run busybox --generator=run-pod/v1 --image=busybox --restart=Never --tty -i --env "SERVICE_IP=$SERVICE_IP" --env "SERVICE_PORT=$SERVICE_PORT"
-u@busybox$ wget -qO- http://$SERVICE_IP:$SERVICE_PORT # Run in the busybox container
-u@busybox$ exit # Exit the busybox container
-$ kubectl delete pod busybox # Clean up the pod we created with "kubectl run"
-
-```
-
-The service definition [exposed the Nginx Service](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) as port 8000 (`$SERVCE_PORT`). We can also access the service from a host running Kubernetes using that port:
-
-```shell
-wget -qO- http://$SERVICE_IP:$SERVICE_PORT # Run on a Kubernetes host
-```
-
-(This works on AWS with Weave.)
-
-To delete the service by name:
-
-```shell
-kubectl delete service nginx-service
-```
-
-When created, each service is assigned a unique IP address. This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the service, and know that communication to the service will be automatically load-balanced out to some Pod that is a member of the set identified by the label selector in the Service.
-
-For more information, see [Services](/docs/concepts/services-networking/service/).
-
-
-## Health Checking
-
-When I write code it never crashes, right? Sadly the [Kubernetes issues list](https://github.com/kubernetes/kubernetes/issues) indicates otherwise...
-
-Rather than trying to write bug-free code, a better approach is to use a management system to perform periodic health checking
-and repair of your application. That way a system outside of your application itself is responsible for monitoring the
-application and taking action to fix it. It's important that the system be outside of the application, since if
-your application fails and the health checking agent is part of your application, it may fail as well and you'll never know.
-In Kubernetes, the health check monitor is the Kubelet agent.
-
-### Process Health Checking
-
-The simplest form of health-checking is just process level health checking. The Kubelet constantly asks the Docker daemon
-if the container process is still running, and if not, the container process is restarted. In all of the Kubernetes examples
-you have run so far, this health checking was actually already enabled. It's on for every single container that runs in
-Kubernetes.
-
-### Application Health Checking
-
-However, in many cases this low-level health checking is insufficient. Consider, for example, the following code:
-
-```go
-lockOne := sync.Mutex{}
-lockTwo := sync.Mutex{}
-
-go func() {
- lockOne.Lock();
- lockTwo.Lock();
- ...
-}()
-
-lockTwo.Lock();
-lockOne.Lock();
-```
-
-This is a classic example of a problem in computer science known as ["Deadlock"](https://en.wikipedia.org/wiki/Deadlock). From Docker's perspective your application is
-still operating and the process is still running, but from your application's perspective your code is locked up and will never respond correctly.
-
-To address this problem, Kubernetes supports user implemented application health-checks. These checks are performed by the
-Kubelet to ensure that your application is operating correctly for a definition of "correctly" that _you_ provide.
-
-Currently, there are three types of application health checks that you can choose from:
-
- * HTTP Health Checks - The Kubelet will call a web hook. If it returns between 200 and 399, it is considered success, failure otherwise. See health check examples [here](/docs/user-guide/liveness/).
- * Container Exec - The Kubelet will execute a command inside your container. If it exits with status 0 it will be considered a success. See health check examples [here](/docs/user-guide/liveness/).
- * TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy, if it can't it is considered a failure.
-
-In all cases, if the Kubelet discovers a failure the container is restarted.
-
-The container health checks are configured in the `livenessProbe` section of your container config. There you can also specify an `initialDelaySeconds` that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.
-
-Here is an example config for a Pod with an HTTP health check
-([pod-with-http-healthcheck.yaml](/examples/pods/probe/pod-with-http-healthcheck.yaml)):
-
-{{< codenew file="pods/probe/pod-with-http-healthcheck.yaml" >}}
-
-And here is an example config for a Pod with a TCP Socket health check
-([pod-with-tcp-socket-healthcheck.yaml](/examples/pods/probe/pod-with-tcp-socket-healthcheck.yaml)):
-
-{{< codenew file="pods/probe/pod-with-tcp-socket-healthcheck.yaml" >}}
-
-For more information about health checking, see [Container Probes](/docs/user-guide/pod-states/#container-probes).
-
-
-## What's Next?
-
-For a complete application see the [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/).
+---
+reviewers:
+- janetkuo
+- mikedanese
+title: Kubernetes 201
+content_template: templates/tutorial
+---
+
+
+{{% capture overview %}}
+
+For Kubernetes 201, we will pick up where 101 left off and cover some slightly more advanced topics in Kubernetes, related to application productionization, Deployment and scaling.
+
+If you went through [Kubernetes 101](/docs/tutorials/k8s101/), you learned about kubectl, Pods, Volumes, and multiple containers.
+
+{{% /capture %}}
+
+{{% capture objectives %}}
+
+* Add labels to the Pod.
+* Manage a Deployment.
+* Manage a Service.
+* Understand health checking.
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+* In order for the kubectl usage examples to work, make sure you have an examples directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or [the source](https://github.com/kubernetes/kubernetes).
+
+{{% /capture %}}
+
+{{% capture lessoncontent %}}
+
+## Labels
+
+Having already learned about Pods and how to create them, you may be struck by an urge to create many, many Pods. Please do! But eventually you will need a system to organize these Pods into groups. The system for achieving this in Kubernetes is Labels. Labels are key-value pairs that are attached to each object in Kubernetes. Label selectors can be passed along with a RESTful `list` request to the apiserver to retrieve a list of objects which match that label selector.
+
+To add a label, add a labels section under metadata in the Pod definition:
+
+```yaml
+ labels:
+ env: test
+```
+
+For example, here is the nginx Pod definition with labels ([pod-nginx.yaml](/examples/pods/pod-nginx.yaml)):
+
+{{< codenew file="pods/pod-nginx.yaml" >}}
+
+Create the labeled Pod:
+
+```shell
+kubectl create -f https://k8s.io/examples/pods/pod-nginx.yaml
+```
+
+List all Pods with the label `env=test`:
+
+```shell
+kubectl get pods -l env=test
+```
+
+Delete the Pod by label:
+
+```shell
+kubectl delete pod -l env=test
+```
+
+For more information, see [Labels](/docs/concepts/overview/working-with-objects/labels/).
+They are a core concept used by two additional Kubernetes building blocks: Deployments and Services.
+
+
+## Deployments
+
+Now that you know how to make awesome, multi-container, labeled Pods, you will want to use them to build an application. You might be tempted to just start creating a whole bunch of individual Pods, but if you do, a whole host of operational concerns pops up. For example: how will you scale the number of Pods up or down? How will you roll out a new release?
+
+The answer to those questions and more is to use a [Deployment](/docs/concepts/workloads/controllers/deployment/) to maintain and update your running _Pods_.
+
+A Deployment object defines a Pod creation template (a "cookie-cutter" if you will) and desired replica count. The Deployment uses a label selector to identify the Pods it manages, and will create or delete Pods as needed to meet the replica count. Deployments are also used to manage safely rolling out changes to your running Pods.
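+
+To make those pieces concrete, here is a minimal sketch of the fields involved (details such as the `apiVersion` may differ from the example file referenced just below): the replica count, the label selector, and the Pod template, with an `app: nginx` label tying the selector to the template.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-deployment
+spec:
+  replicas: 2                # desired number of Pods
+  selector:
+    matchLabels:
+      app: nginx             # the Deployment manages Pods with this label
+  template:                  # the "cookie-cutter" used to stamp out Pods
+    metadata:
+      labels:
+        app: nginx           # must match the selector above
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.7.9
+        ports:
+        - containerPort: 80
+```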
+
+Here is a Deployment that instantiates two nginx Pods:
+
+{{< codenew file="application/deployment.yaml" >}}
+
+
+### Deployment Management
+
+Create an nginx Deployment:
+
+```shell
+kubectl create -f https://k8s.io/examples/application/deployment.yaml
+```
+
+List all Deployments:
+
+```shell
+kubectl get deployment
+```
+
+List the Pods created by the Deployment:
+
+```shell
+kubectl get pods -l app=nginx
+```
+
+Upgrade the nginx container from 1.7.9 to 1.8 by changing the Deployment and calling `apply`. The following config
+contains the desired changes:
+
+{{< codenew file="application/deployment-update.yaml" >}}
+
+```shell
+kubectl apply -f https://k8s.io/examples/application/deployment-update.yaml
+```
+
+Watch the Deployment create Pods with new names and delete the old Pods:
+
+```shell
+kubectl get pods -l app=nginx
+```
+
+Delete the Deployment by name:
+
+```shell
+kubectl delete deployment nginx-deployment
+```
+
+For more information, such as how to roll back a Deployment to a previous version, see [_Deployments_](/docs/concepts/workloads/controllers/deployment/).
+
+
+## Services
+
+Once you have a replicated set of Pods, you need an abstraction that enables connectivity between the layers of your application. For example, if you have a Deployment managing your backend jobs, you don't want to have to reconfigure your front-ends whenever you re-scale your backends. Likewise, if the Pods in your backends are scheduled (or rescheduled) onto different machines, you shouldn't need to reconfigure your front-ends. In Kubernetes, the Service abstraction achieves these goals. A Service provides a way to refer to a set of Pods (selected by labels) with a single static IP address. It may also provide load balancing, if supported by the provider.
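+
+As a minimal sketch (the actual example file is referenced just below and may differ in its details), a Service needs little more than a label selector identifying the Pods and the port mapping it exposes; the name, label, and port here mirror the ones used later in this section:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx-service
+spec:
+  selector:
+    app: nginx         # forward traffic to Pods carrying this label
+  ports:
+  - port: 8000         # port exposed on the Service's cluster IP
+    targetPort: 80     # port the selected nginx Pods listen on
+```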
+
+For example, here is a service that balances across the Pods created in the previous nginx Deployment example ([service.yaml](/examples/service/nginx-service.yaml)):
+
+{{< codenew file="service/nginx-service.yaml" >}}
+
+
+### Service Management
+
+Create an nginx Service:
+
+```shell
+kubectl create -f https://k8s.io/examples/service/nginx-service.yaml
+```
+
+List all services:
+
+```shell
+kubectl get services
+```
+
+On most providers, the service IPs are not externally accessible. The easiest way to test that the service is working is to create a busybox Pod and exec commands on it remotely. See the [command execution documentation](/docs/user-guide/kubectl-overview/) for details.
+
+Provided the service IP is accessible, you should be able to access its HTTP endpoint with `wget` on the exposed port. First, capture the service IP and port in environment variables:
+
+```shell
+export SERVICE_IP=$(kubectl get service nginx-service -o go-template='{{.spec.clusterIP}}')
+export SERVICE_PORT=$(kubectl get service nginx-service -o go-template='{{(index .spec.ports 0).port}}')
+```
+
+Check `$SERVICE_IP` and `$SERVICE_PORT`:
+
+```shell
+echo "$SERVICE_IP:$SERVICE_PORT"
+```
+
+Then, create a busybox Pod:
+
+```shell
+kubectl run busybox --generator=run-pod/v1 --image=busybox --restart=Never --tty -i --env "SERVICE_IP=$SERVICE_IP" --env "SERVICE_PORT=$SERVICE_PORT"
+u@busybox$ wget -qO- http://$SERVICE_IP:$SERVICE_PORT # Run in the busybox container
+u@busybox$ exit # Exit the busybox container
+
+kubectl delete pod busybox # Clean up the pod we created with "kubectl run"
+
+```
+
+The service definition [exposed the nginx Service](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) as port 8000 (`$SERVICE_PORT`). We can also access the service from a host running Kubernetes using that port:
+
+```shell
+wget -qO- http://$SERVICE_IP:$SERVICE_PORT # Run on a Kubernetes host
+```
+
+(This works on AWS with Weave.)
+
+To delete the service by name:
+
+```shell
+kubectl delete service nginx-service
+```
+
+When created, each service is assigned a unique IP address. This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the service, and know that communication to the service will be automatically load-balanced out to some Pod that is a member of the set identified by the label selector in the Service.
+
+For more information, see [Services](/docs/concepts/services-networking/service/).
+
+
+## Health Checking
+
+When I write code it never crashes, right? Sadly the [Kubernetes issues list](https://github.com/kubernetes/kubernetes/issues) indicates otherwise...
+
+Rather than trying to write bug-free code, a better approach is to use a management system to perform periodic health checking
+and repair of your application. That way a system outside of your application itself is responsible for monitoring the
+application and taking action to fix it. It's important that the system be outside of the application, since if
+your application fails and the health checking agent is part of your application, it may fail as well and you'll never know.
+In Kubernetes, the health check monitor is the Kubelet agent.
+
+### Process Health Checking
+
+The simplest form of health-checking is just process level health checking. The Kubelet constantly asks the Docker daemon
+if the container process is still running, and if not, the container process is restarted. In all of the Kubernetes examples
+you have run so far, this health checking was actually already enabled. It's on for every single container that runs in
+Kubernetes.
+
+### Application Health Checking
+
+However, in many cases this low-level health checking is insufficient. Consider, for example, the following code:
+
+```go
+package main
+
+import "sync"
+
+func main() {
+	lockOne := sync.Mutex{}
+	lockTwo := sync.Mutex{}
+
+	// This goroutine acquires the locks in the order lockOne, lockTwo.
+	go func() {
+		lockOne.Lock()
+		lockTwo.Lock()
+		// ... do some work while holding both locks ...
+	}()
+
+	// The main goroutine acquires the same locks in the opposite order, so
+	// each goroutine can end up waiting forever for the lock the other holds.
+	lockTwo.Lock()
+	lockOne.Lock()
+}
+```
+
+This is a classic example of a problem in computer science known as ["Deadlock"](https://en.wikipedia.org/wiki/Deadlock). From Docker's perspective, your application is
+still operating and the process is still running, but from your application's perspective, your code is locked up and will never respond correctly.
+
+To address this problem, Kubernetes supports user-implemented application health checks. These checks are performed by the
+Kubelet to ensure that your application is operating correctly, for a definition of "correctly" that _you_ provide.
+
+Currently, there are three types of application health checks that you can choose from:
+
+ * HTTP Health Checks - The Kubelet will call a web hook on your container. If it returns an HTTP status code between 200 and 399, the check is considered a success; anything else is a failure. See health check examples [here](/docs/user-guide/liveness/).
+ * Container Exec - The Kubelet will execute a command inside your container. If it exits with status 0, the check is considered a success. See health check examples [here](/docs/user-guide/liveness/).
+ * TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy; if it can't, the check is considered a failure.
+
+In all cases, if the Kubelet discovers a failure the container is restarted.
+
+The container health checks are configured in the `livenessProbe` section of your container config. There you can also specify an `initialDelaySeconds`, a grace period between when the container starts and when health checks begin, which gives your container time to perform any necessary initialization.
+
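+As a minimal sketch of that configuration (the container name, image, path, port, and timing here are placeholders rather than values taken from the example files below), an HTTP liveness probe sits inside the container spec like this; `exec` and `tcpSocket` handlers can be used in place of `httpGet` for the other two check types:
+
+```yaml
+    containers:
+    - name: my-app                # hypothetical container name
+      image: my-app:1.0           # hypothetical image
+      livenessProbe:
+        httpGet:                  # HTTP check: a 200-399 response means healthy
+          path: /healthz
+          port: 8080
+        initialDelaySeconds: 15   # grace period before the first check
+```
+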
+Here is an example config for a Pod with an HTTP health check
+([pod-with-http-healthcheck.yaml](/examples/pods/probe/pod-with-http-healthcheck.yaml)):
+
+{{< codenew file="pods/probe/pod-with-http-healthcheck.yaml" >}}
+
+And here is an example config for a Pod with a TCP Socket health check
+([pod-with-tcp-socket-healthcheck.yaml](/examples/pods/probe/pod-with-tcp-socket-healthcheck.yaml)):
+
+{{< codenew file="pods/probe/pod-with-tcp-socket-healthcheck.yaml" >}}
+
+For more information about health checking, see [Container Probes](/docs/user-guide/pod-states/#container-probes).
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+For a complete application see the [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/).
+
+{{% /capture %}}
diff --git a/content/en/docs/tutorials/online-training/overview.md b/content/en/docs/tutorials/online-training/overview.md
index f2cfd6c8b..1f1c0a975 100644
--- a/content/en/docs/tutorials/online-training/overview.md
+++ b/content/en/docs/tutorials/online-training/overview.md
@@ -17,6 +17,10 @@ Here are some of the sites that offer online training for Kubernetes:
* [Getting Started with Kubernetes (Pluralsight)](https://www.pluralsight.com/courses/getting-started-kubernetes)
+* [Hands-on Introduction to Kubernetes (Instruqt)](https://play.instruqt.com/public/topics/getting-started-with-kubernetes)
+
+* [Learn Kubernetes using Interactive Hands-on Scenarios (Katacoda)](https://www.katacoda.com/courses/kubernetes/)
+
{{% /capture %}}
diff --git a/content/en/docs/tutorials/services/source-ip.md b/content/en/docs/tutorials/services/source-ip.md
index 01cc9d5db..b436fd617 100644
--- a/content/en/docs/tutorials/services/source-ip.md
+++ b/content/en/docs/tutorials/services/source-ip.md
@@ -111,7 +111,7 @@ If the client pod and server pod are in the same node, the client_address is the
## Source IP for Services with Type=NodePort
-As of Kubernetes 1.5, packets sent to Services with [Type=NodePort](/docs/concepts/services-networking/service/#type-nodeport)
+As of Kubernetes 1.5, packets sent to Services with [Type=NodePort](/docs/concepts/services-networking/service/#nodeport)
are source NAT'd by default. You can test this by creating a `NodePort` Service:
```console
@@ -209,7 +209,7 @@ Visually:
## Source IP for Services with Type=LoadBalancer
-As of Kubernetes 1.5, packets sent to Services with [Type=LoadBalancer](/docs/concepts/services-networking/service/#type-loadbalancer) are
+As of Kubernetes 1.5, packets sent to Services with [Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer) are
source NAT'd by default, because all schedulable Kubernetes nodes in the
`Ready` state are eligible for loadbalanced traffic. So if packets arrive
at a node without an endpoint, the system proxies it to a node *with* an
diff --git a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md
index 629a132f3..490d14ed8 100644
--- a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md
+++ b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md
@@ -78,8 +78,8 @@ Headless Service and StatefulSet defined in `web.yaml`.
```shell
kubectl create -f web.yaml
-service "nginx" created
-statefulset "web" created
+service/nginx created
+statefulset.apps/web created
```
The command above creates two Pods, each running an
@@ -88,8 +88,8 @@ The command above creates two Pods, each running an
```shell
kubectl get service nginx
-NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-nginx None 80/TCP 12s
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+nginx ClusterIP None 80/TCP 12s
kubectl get statefulset web
NAME DESIRED CURRENT AGE
@@ -356,7 +356,7 @@ to 5.
```shell
kubectl scale sts web --replicas=5
-statefulset "web" scaled
+statefulset.apps/web scaled
```
Examine the output of the `kubectl get` command in the first terminal, and wait
@@ -401,7 +401,7 @@ three replicas.
```shell
kubectl patch sts web -p '{"spec":{"replicas":3}}'
-statefulset "web" patched
+statefulset.apps/web patched
```
Wait for `web-4` and `web-3` to transition to Terminating.
@@ -464,7 +464,7 @@ Patch the `web` StatefulSet to apply the `RollingUpdate` update strategy.
```shell
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
-statefulset "web" patched
+statefulset.apps/web patched
```
In one terminal window, patch the `web` StatefulSet to change the container
@@ -472,7 +472,7 @@ image again.
```shell
kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.8"}]'
-statefulset "web" patched
+statefulset.apps/web patched
```
In another terminal, watch the Pods in the StatefulSet.
@@ -549,14 +549,14 @@ Patch the `web` StatefulSet to add a partition to the `updateStrategy` field.
```shell
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
-statefulset "web" patched
+statefulset.apps/web patched
```
Patch the StatefulSet again to change the container's image.
```shell
kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"k8s.gcr.io/nginx-slim:0.7"}]'
-statefulset "web" patched
+statefulset.apps/web patched
```
Delete a Pod in the StatefulSet.
@@ -598,7 +598,7 @@ Patch the StatefulSet to decrement the partition.
```shell
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
-statefulset "web" patched
+statefulset.apps/web patched
```
Wait for `web-2` to be Running and Ready.
@@ -673,7 +673,7 @@ The partition is currently set to `2`. Set the partition to `0`.
```shell
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'
-statefulset "web" patched
+statefulset.apps/web patched
```
Wait for all of the Pods in the StatefulSet to become Running and Ready.
@@ -740,7 +740,7 @@ not delete any of its Pods.
```shell
kubectl delete statefulset web --cascade=false
-statefulset "web" deleted
+statefulset.apps "web" deleted
```
Get the Pods to examine their status.
@@ -784,7 +784,7 @@ an error indicating that the Service already exists.
```shell
kubectl create -f web.yaml
-statefulset "web" created
+statefulset.apps/web created
Error from server (AlreadyExists): error when creating "web.yaml": services "nginx" already exists
```
@@ -844,7 +844,7 @@ In another terminal, delete the StatefulSet again. This time, omit the
```shell
kubectl delete statefulset web
-statefulset "web" deleted
+statefulset.apps "web" deleted
```
Examine the output of the `kubectl get` command running in the first terminal,
and wait for all of the Pods to transition to Terminating.
@@ -884,8 +884,8 @@ Recreate the StatefulSet and Headless Service one more time.
```shell
kubectl create -f web.yaml
-service "nginx" created
-statefulset "web" created
+service/nginx created
+statefulset.apps/web created
```
When all of the StatefulSet's Pods transition to Running and Ready, retrieve
@@ -948,8 +948,8 @@ In another terminal, create the StatefulSet and Service in the manifest.
```shell
kubectl create -f web-parallel.yaml
-service "nginx" created
-statefulset "web" created
+service/nginx created
+statefulset.apps/web created
```
Examine the output of the `kubectl get` command that you executed in the first terminal.
@@ -974,7 +974,7 @@ StatefulSet.
```shell
kubectl scale statefulset/web --replicas=4
-statefulset "web" scaled
+statefulset.apps/web scaled
```
Examine the output of the terminal where the `kubectl get` command is running.
diff --git a/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md
index 7913164c1..acebf69d6 100644
--- a/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md
+++ b/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md
@@ -169,8 +169,8 @@ The following manifest describes a single-instance WordPress Deployment and Serv
The response should be like this:
```
- NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- wordpress 10.0.0.89 80:32406/TCP 4m
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ wordpress ClusterIP 10.0.0.89 80:32406/TCP 4m
```
{{< note >}}**Note:** Minikube can only expose Services through `NodePort`. The EXTERNAL-IP is always pending.{{< /note >}}
diff --git a/content/en/docs/tutorials/stateful-application/zookeeper.md b/content/en/docs/tutorials/stateful-application/zookeeper.md
index d0375a096..5eacf9374 100644
--- a/content/en/docs/tutorials/stateful-application/zookeeper.md
+++ b/content/en/docs/tutorials/stateful-application/zookeeper.md
@@ -676,7 +676,7 @@ kubectl exec zk-0 -- ps -ef
```
The command used as the container's entry point has PID 1, and
-the ZooKeeper process, a child of the entry point, has PID 23.
+the ZooKeeper process, a child of the entry point, has PID 27.
```shell
UID PID PPID C STIME TTY TIME CMD
diff --git a/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md
index 8c6076821..775afd010 100644
--- a/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md
+++ b/content/en/docs/tutorials/stateless-application/expose-external-ip-address.md
@@ -72,8 +72,8 @@ external IP address.
The output is similar to this:
- NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- my-service 10.3.245.137 104.198.205.71 8080/TCP 54s
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ my-service ClusterIP 10.3.245.137 104.198.205.71 8080/TCP 54s
Note: If the external IP address is shown as \<pending\>, wait for a minute
and enter the same command again.
@@ -108,7 +108,7 @@ external IP address.
addresses of the pods that are running the Hello World application. To
verify these are pod addresses, enter this command:
- kubectl get pods --output=wide
+ kubectl get pods --output=wide
The output is similar to this:
diff --git a/content/en/docs/tutorials/stateless-application/guestbook.md b/content/en/docs/tutorials/stateless-application/guestbook.md
index 2b22a8194..1b65910f2 100644
--- a/content/en/docs/tutorials/stateless-application/guestbook.md
+++ b/content/en/docs/tutorials/stateless-application/guestbook.md
@@ -94,9 +94,9 @@ The guestbook applications needs to communicate to the Redis master to write its
The response should be similar to this:
```shell
- NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- kubernetes 10.0.0.1 443/TCP 1m
- redis-master 10.0.0.151 6379/TCP 8s
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ kubernetes ClusterIP 10.0.0.1 443/TCP 1m
+ redis-master ClusterIP 10.0.0.151 6379/TCP 8s
```
{{< note >}}
@@ -158,10 +158,10 @@ The guestbook application needs to communicate to Redis slaves to read data. To
The response should be similar to this:
```
- NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- kubernetes 10.0.0.1 443/TCP 2m
- redis-master 10.0.0.151 6379/TCP 1m
- redis-slave 10.0.0.223 6379/TCP 6s
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ kubernetes ClusterIP 10.0.0.1 443/TCP 2m
+ redis-master ClusterIP 10.0.0.151 6379/TCP 1m
+ redis-slave ClusterIP 10.0.0.223 6379/TCP 6s
```
## Set up and Expose the Guestbook Frontend
@@ -220,11 +220,11 @@ If you want guests to be able to access your guestbook, you must configure the f
The response should be similar to this:
```
- NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- frontend 10.0.0.112 80:31323/TCP 6s
- kubernetes 10.0.0.1 443/TCP 4m
- redis-master 10.0.0.151 6379/TCP 2m
- redis-slave 10.0.0.223 6379/TCP 1m
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ frontend ClusterIP 10.0.0.112 80:31323/TCP 6s
+ kubernetes ClusterIP 10.0.0.1 443/TCP 4m
+ redis-master ClusterIP 10.0.0.151 6379/TCP 2m
+ redis-slave ClusterIP 10.0.0.223 6379/TCP 1m
```
### Viewing the Frontend Service via `NodePort`
@@ -258,8 +258,8 @@ If you deployed the `frontend-service.yaml` manifest with type: `LoadBalancer` y
The response should be similar to this:
```
- NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- frontend 10.51.242.136 109.197.92.229 80:32372/TCP 1m
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ frontend ClusterIP 10.51.242.136 109.197.92.229 80:32372/TCP 1m
```
1. Copy the external IP address, and load the page in your browser to view your guestbook.
@@ -334,11 +334,11 @@ Deleting the Deployments and Services also deletes any running Pods. Use labels
The responses should be:
```
- deployment "redis-master" deleted
- deployment "redis-slave" deleted
+ deployment.apps "redis-master" deleted
+ deployment.apps "redis-slave" deleted
service "redis-master" deleted
service "redis-slave" deleted
- deployment "frontend" deleted
+ deployment.apps "frontend" deleted
service "frontend" deleted
```
diff --git a/content/en/partners/_index.html b/content/en/partners/_index.html
index 62aa0596f..6bcc3a781 100644
--- a/content/en/partners/_index.html
+++ b/content/en/partners/_index.html
@@ -10,9 +10,9 @@
Kubernetes works with partners to create a strong, vibrant codebase that supports a spectrum of complementary platforms.
-
Kubernetes Certified Service Providers
Vetted service providers with deep experience helping enterprises successfully adopt Kubernetes.
-
Certified Kubernetes Distributions, Hosted Platforms, and Installers
Software conformance ensures that every vendor’s version of Kubernetes supports the required APIs.
-
Kubernetes Training Partners
Vetted training providers who have deep experience in cloud native technology training.
+
Kubernetes Certified Service Providers
Vetted service providers with deep experience helping enterprises successfully adopt Kubernetes.
- Create an Issue
+ Report a problem
{{ end }}
{{ end }}
{{ if not .Params.noedit }}
- Edit this Page
+ Edit on Github
{{ end }}
{{ if not .Params.showcommit }}
@@ -50,5 +48,22 @@
{{ partialCached "footer.html" . }}
{{ partialCached "footer-scripts.html" . }}
+