fix-up 404 urls (kubernetes#17668)
tanjunchen authored and Cheng Pan committed Nov 26, 2019
1 parent: 2d8a780; commit: 2b6e69a
Showing 15 changed files with 19 additions and 19 deletions.
@@ -74,7 +74,7 @@ It is feature complete. All Kubernetes features are supported.

All [CRI validation test](https://github.com/kubernetes/community/blob/master/contributors/devel/cri-validation.md)s have passed. (A CRI validation is a test framework for validating whether a CRI implementation meets all the requirements expected by Kubernetes.)

All regular [node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-node-tests.md)s have passed. (The Kubernetes test framework for testing Kubernetes node level functionalities such as managing pods, mounting volumes etc.)
All regular [node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/e2e-tests.md)s have passed. (The Kubernetes test framework for testing Kubernetes node level functionalities such as managing pods, mounting volumes etc.)

To learn more about the v1.0.0-alpha.0 release, see the [project repository](https://github.com/kubernetes-incubator/cri-containerd/releases/tag/v1.0.0-alpha.0).

@@ -20,7 +20,7 @@ Because the feature is alpha in 1.9, it must be explicitly enabled. Alpha featur
### Why Kubernetes CSI?
Kubernetes volume plugins are currently “in-tree”, meaning they’re linked, compiled, built, and shipped with the core kubernetes binaries. Adding support for a new storage system to Kubernetes (a volume plugin) requires checking code into the core Kubernetes repository. But aligning with the Kubernetes release process is painful for many plugin developers.

The existing [Flex Volume plugin](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md) attempted to address this pain by exposing an exec based API for external volume plugins. Although it enables third party storage vendors to write drivers out-of-tree, in order to deploy the third party driver files it requires access to the root filesystem of node and master machines.
The existing [Flex Volume plugin](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md) attempted to address this pain by exposing an exec based API for external volume plugins. Although it enables third party storage vendors to write drivers out-of-tree, in order to deploy the third party driver files it requires access to the root filesystem of node and master machines.

In addition to being difficult to deploy, Flex did not address the pain of plugin dependencies: Volume plugins tend to have many external requirements (on mount and filesystem tools, for example). These dependencies are assumed to be available on the underlying host OS which is often not the case (and installing them requires access to the root filesystem of node machine).
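For readers skimming this excerpt, the "exec based API" means the kubelet shells out to a driver binary with a subcommand plus JSON-encoded options and reads a JSON status object back on stdout. A minimal, hypothetical driver sketch in Python (illustrative only, not any vendor's actual driver; it covers just the init/mount/unmount subset):

```python
#!/usr/bin/env python3
# Hypothetical FlexVolume driver sketch (illustrative, not a real vendor driver).
# The kubelet execs this file with a subcommand plus JSON options and reads a
# JSON status object from stdout -- that is the "exec based API" discussed above.
import json
import os
import subprocess
import sys


def reply(status, **extra):
    print(json.dumps({"status": status, **extra}))


def main():
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Reported once when the kubelet discovers the driver.
        reply("Success", capabilities={"attach": False})
    elif op == "mount":
        mount_dir, options = sys.argv[2], json.loads(sys.argv[3])
        os.makedirs(mount_dir, exist_ok=True)
        # A real driver would attach/format/mount vendor storage here,
        # usually by shelling out to tools that must already exist on the host.
        reply("Success")
    elif op == "unmount":
        subprocess.run(["umount", sys.argv[2]], check=False)
        reply("Success")
    else:
        reply("Not supported")


if __name__ == "__main__":
    main()
```

Because the kubelet execs this file directly, the script and every host tool it calls must already be present on each node, which is exactly the deployment and dependency pain described above.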

@@ -215,7 +215,7 @@ CSI drivers are developed and maintained by third-parties. You can find example


### What about Flex?
The [Flex Volume plugin](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md) exists as an exec based mechanism to create “out-of-tree” volume plugins. Although it has some drawbacks (mentioned above), the Flex volume plugin coexists with the new CSI Volume plugin. SIG Storage will continue to maintain the Flex API so that existing third-party Flex drivers (already deployed in production clusters) continue to work. In the future, new volume features will only be added to CSI, not Flex.
The [Flex Volume plugin](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md) exists as an exec based mechanism to create “out-of-tree” volume plugins. Although it has some drawbacks (mentioned above), the Flex volume plugin coexists with the new CSI Volume plugin. SIG Storage will continue to maintain the Flex API so that existing third-party Flex drivers (already deployed in production clusters) continue to work. In the future, new volume features will only be added to CSI, not Flex.


### What will happen to the in-tree volume plugins?
@@ -171,7 +171,7 @@ CSI drivers are developed and maintained by third parties. You can find a non-de

## What about FlexVolumes?

As mentioned in the [alpha release blog post](https://kubernetes.io/blog/2018/01/introducing-container-storage-interface), [FlexVolume plugin](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md) was an earlier attempt to make the Kubernetes volume plugin system extensible. Although it enables third party storage vendors to write drivers “out-of-tree”, because it is an exec based API, FlexVolumes requires files for third party driver binaries (or scripts) to be copied to a special plugin directory on the root filesystem of every node (and, in some cases, master) machine. This requires a cluster admin to have write access to the host filesystem for each node and some external mechanism to ensure that the driver file is recreated if deleted, just to deploy a volume plugin.
As mentioned in the [alpha release blog post](https://kubernetes.io/blog/2018/01/introducing-container-storage-interface), [FlexVolume plugin](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md) was an earlier attempt to make the Kubernetes volume plugin system extensible. Although it enables third party storage vendors to write drivers “out-of-tree”, because it is an exec based API, FlexVolumes requires files for third party driver binaries (or scripts) to be copied to a special plugin directory on the root filesystem of every node (and, in some cases, master) machine. This requires a cluster admin to have write access to the host filesystem for each node and some external mechanism to ensure that the driver file is recreated if deleted, just to deploy a volume plugin.

In addition to being difficult to deploy, Flex did not address the pain of plugin dependencies: Volume plugins tend to have many external requirements (on mount and filesystem tools, for example). These dependencies are assumed to be available on the underlying host OS, which is often not the case.

@@ -35,7 +35,7 @@ Improving performance was one of the major focus items for the containerd 1.1 re

The following results are a comparison between containerd 1.1 and Docker 18.03 CE. The containerd 1.1 integration uses the CRI plugin built into containerd; and the Docker 18.03 CE integration uses the dockershim.

The results were generated using the Kubernetes node performance benchmark, which is part of [Kubernetes node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-node-tests.md). Most of the containerd benchmark data is publicly accessible on the [node performance dashboard](http://node-perf-dash.k8s.io/).
The results were generated using the Kubernetes node performance benchmark, which is part of [Kubernetes node e2e test](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/e2e-tests.md). Most of the containerd benchmark data is publicly accessible on the [node performance dashboard](http://node-perf-dash.k8s.io/).

### Pod Startup Latency
The "105 pod batch startup benchmark" results show that the containerd 1.1 integration has lower pod startup latency than Docker 18.03 CE integration with dockershim (lower is better).
2 changes: 1 addition & 1 deletion content/en/blog/_posts/2018-07-24-cpu-manager.md
@@ -109,7 +109,7 @@ There is better performance and less performance variation for both the co-locat

### Performance Isolation for Stand-Alone Workloads

This section shows the performance improvement and isolation provided by the CPU manager for stand-alone real-world workloads. We use two workloads from the [TensorFlow official models](https://github.com/tensorflow/models/tree/master/official): [wide and deep](https://github.com/tensorflow/models/tree/master/official/wide_deep) and [ResNet](https://github.com/tensorflow/models/tree/master/official/resnet). We use the census and CIFAR10 dataset for the wide and deep and ResNet models respectively. In each case the [pods](https://gist.github.com/balajismaniam/941db0d0ec14e2bc93b7dfe04d1f6c58) ([wide and deep](https://gist.github.com/balajismaniam/9953b54dd240ecf085b35ab1bc283f3c), [ResNet](https://gist.github.com/balajismaniam/a1919010fe9081ca37a6e1e7b01f02e3)) request 24 CPUs which corresponds to a whole socket worth of cores. As shown in the plots, CPU manager enables better performance isolation in both cases.
This section shows the performance improvement and isolation provided by the CPU manager for stand-alone real-world workloads. We use two workloads from the [TensorFlow official models](https://github.com/tensorflow/models/tree/master/official): [wide and deep](https://github.com/tensorflow/models/tree/master/official/r1/wide_deep) and [ResNet](https://github.com/tensorflow/models/tree/master/official/r1/resnet). We use the census and CIFAR10 dataset for the wide and deep and ResNet models respectively. In each case the [pods](https://gist.github.com/balajismaniam/941db0d0ec14e2bc93b7dfe04d1f6c58) ([wide and deep](https://gist.github.com/balajismaniam/9953b54dd240ecf085b35ab1bc283f3c), [ResNet](https://gist.github.com/balajismaniam/a1919010fe9081ca37a6e1e7b01f02e3)) request 24 CPUs which corresponds to a whole socket worth of cores. As shown in the plots, CPU manager enables better performance isolation in both cases.

![performance comparison](/images/blog/2018-07-24-cpu-manager/performance-comparison-2.png)
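For context on the workload shape behind these numbers: exclusive-core pinning under the static CPU manager policy applies to Guaranteed-QoS pods whose containers request an integer number of CPUs. Below is a hedged sketch of such a pod created with the Python client; the pod name, image, and memory size are placeholders, not the actual benchmark specs linked above:

```python
# Hypothetical sketch: a Guaranteed-QoS pod with an integer CPU request, the
# shape of workload that the static CPU manager policy pins to exclusive cores.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="pinned-trainer"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="trainer",
                image="tensorflow/tensorflow:latest",  # placeholder image
                resources=client.V1ResourceRequirements(
                    # requests == limits with an integer CPU count => Guaranteed QoS,
                    # eligible for exclusive cores under the static policy.
                    requests={"cpu": "24", "memory": "64Gi"},
                    limits={"cpu": "24", "memory": "64Gi"},
                ),
            )
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```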

2 changes: 1 addition & 1 deletion content/en/case-studies/ygrene/index.html
@@ -55,7 +55,7 @@ <h2>Impact</h2>
</div>

<div class="fullcol">
<h2>In less than a decade, <a href="https://ygrene.com/index.html" style="text-decoration:underline">Ygrene</a> has funded more than $1 billion in loans for renewable energy&nbsp;projects.</h2> A <a href="https://www.energy.gov/eere/slsc/property-assessed-clean-energy-programs">PACE</a> (Property Assessed Clean Energy) financing company, "We take the equity in a home or a commercial building, and use it to finance property improvements for anything that saves electricity, produces electricity, saves water, or reduces carbon emissions," says Development Manager Austin Adams. <br><br>
<h2>In less than a decade, <a href="https://ygrene.com/" style="text-decoration:underline">Ygrene</a> has funded more than $1 billion in loans for renewable energy&nbsp;projects.</h2> A <a href="https://www.energy.gov/eere/slsc/property-assessed-clean-energy-programs">PACE</a> (Property Assessed Clean Energy) financing company, "We take the equity in a home or a commercial building, and use it to finance property improvements for anything that saves electricity, produces electricity, saves water, or reduces carbon emissions," says Development Manager Austin Adams. <br><br>
In order to approve those loans, the company processes an enormous amount of underwriting data. "We have tons of different points that we have to validate about the property, about the company, or about the person," Adams says. "So we have lots of data sources that are being aggregated, and we also have lots of systems that need to churn on that data in real time." <br><br>
By 2017, deployments and scalability had become pain points. The company was utilizing massive servers, and "we just reached the limit of being able to scale them vertically," he says. Migrating to AWS Elastic Beanstalk didn’t solve the problem: "The Scala services needed a lot of data from the main Ruby on Rails services and from different vendors, so they were asking for information from our Ruby services at a rate that those services couldn’t handle. We had lots of configuration misses with Elastic Beanstalk as well. It just came to a head, and we realized we had a really unstable system."

2 changes: 1 addition & 1 deletion content/en/docs/reference/glossary/flexvolume.md
@@ -18,5 +18,5 @@ tags:
FlexVolumes enable users to write their own drivers and add support for their volumes in Kubernetes. FlexVolume driver binaries and dependencies must be installed on host machines. This requires root access. The Storage SIG suggests implementing a {{< glossary_tooltip text="CSI" term_id="csi" >}} driver if possible since it addresses the limitations with FlexVolumes.

* [FlexVolume in the Kubernetes documentation](/docs/concepts/storage/volumes/#flexvolume)
* [More information on FlexVolumes](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md)
* [More information on FlexVolumes](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md)
* [Volume Plugin FAQ for Storage Vendors](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md)
4 changes: 2 additions & 2 deletions content/en/docs/setup/production-environment/tools/kops.md
@@ -14,8 +14,8 @@ kops is an opinionated provisioning system:
* Fully automated installation
* Uses DNS to identify clusters
* Self-healing: everything runs in Auto-Scaling Groups
* Multiple OS support (Debian, Ubuntu 16.04 supported, CentOS & RHEL, Amazon Linux and CoreOS) - see the [images.md](https://github.com/kubernetes/kops/blob/master/docs/images.md)
* High-Availability support - see the [high_availability.md](https://github.com/kubernetes/kops/blob/master/docs/high_availability.md)
* Multiple OS support (Debian, Ubuntu 16.04 supported, CentOS & RHEL, Amazon Linux and CoreOS) - see the [images.md](https://github.com/kubernetes/kops/blob/master/docs/operations/images.md)
* High-Availability support - see the [high_availability.md](https://github.com/kubernetes/kops/blob/master/docs/operations/high_availability.md)
* Can directly provision, or generate terraform manifests - see the [terraform.md](https://github.com/kubernetes/kops/blob/master/docs/terraform.md)

If your opinions differ from these you may prefer to build your own cluster using [kubeadm](/docs/admin/kubeadm/) as
@@ -17,4 +17,4 @@ To use custom binaries or open source Kubernetes, follow the instructions below.

The source code for [Kubernetes with Alibaba Cloud provider implementation](https://github.com/AliyunContainerService/kubernetes) is open source and available on GitHub.

For more information, see "[Quick deployment of Kubernetes - VPC environment on Alibaba Cloud](https://www.alibabacloud.com/forum/read-830)" in English and [Chinese](https://yq.aliyun.com/articles/66474).
For more information, see "[Quick deployment of Kubernetes - VPC environment on Alibaba Cloud](https://www.alibabacloud.com/forum/read-830)" in English.
@@ -514,7 +514,7 @@ Your main source of help for troubleshooting your Kubernetes cluster should star
```powershell
Get-NetAdapter | ? Name -Like "vEthernet (Ethernet*"
```
Often it is worthwhile to modify the [InterfaceName](https://github.com/Microsoft/SDN/blob/master/Kubernetes/flannel/l2bridge/start.ps1#L6) parameter of the start.ps1 script, in cases where the host's network adapter isn't "Ethernet". Otherwise, consult the output of the `start-kubelet.ps1` script to see if there are errors during virtual network creation.
Often it is worthwhile to modify the [InterfaceName](https://github.com/microsoft/SDN/blob/master/Kubernetes/flannel/start.ps1#L6) parameter of the start.ps1 script, in cases where the host's network adapter isn't "Ethernet". Otherwise, consult the output of the `start-kubelet.ps1` script to see if there are errors during virtual network creation.
1. My Pods are stuck at "Container Creating" or restarting over and over
@@ -254,7 +254,7 @@ Users can generate values for the `ControlPlane.KubeadmToken` and `ControlPlane.
1. Install containers and Kubernetes (requires a system reboot)
Use the previously downloaded [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/v1.15.0/KubeCluster.ps1) script to install Kubernetes on the Windows Server container host:
Use the previously downloaded [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) script to install Kubernetes on the Windows Server container host:
```PowerShell
.\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -install
```
@@ -284,7 +284,7 @@ Once installation is complete, any of the generated configuration files or binar
#### Join the Windows Node to the Kubernetes cluster
This section covers how to join a [Windows node with Kubernetes installed](#preparing-a-windows-node) with an existing (Linux) control-plane, to form a cluster.
Use the previously downloaded [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/v1.15.0/KubeCluster.ps1) script to join the Windows node to the cluster:
Use the previously downloaded [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) script to join the Windows node to the cluster:
```PowerShell
.\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -join
```
@@ -319,7 +319,7 @@ kubectl get nodes
#### Remove the Windows Node from the Kubernetes cluster
In this section we'll cover how to remove a Windows node from a Kubernetes cluster.

Use the previously downloaded [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/v1.15.0/KubeCluster.ps1) script to remove the Windows node from the cluster:
Use the previously downloaded [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) script to remove the Windows node from the cluster:

```PowerShell
.\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -reset
```
2 changes: 1 addition & 1 deletion content/en/docs/setup/release/notes.md
@@ -595,7 +595,7 @@ The main themes of this release are:
- github.com/google/martian: [v2.1.0+incompatible](https://github.com/google/martian/tree/v2.1.0)
- github.com/google/pprof: [3ea8567](https://github.com/google/pprof/tree/3ea8567)
- github.com/google/renameio: [v0.1.0](https://github.com/google/renameio/tree/v0.1.0)
- github.com/googleapis/gax-go/v2: [v2.0.4](https://github.com/googleapis/gax-go/v2/tree/v2.0.4)
- github.com/googleapis/gax-go/v2: [v2.0.4](https://github.com/googleapis/gax-go/tree/v2.0.4)
- github.com/hashicorp/go-syslog: [v1.0.0](https://github.com/hashicorp/go-syslog/tree/v1.0.0)
- github.com/jimstudt/http-authentication: [3eca13d](https://github.com/jimstudt/http-authentication/tree/3eca13d)
- github.com/kisielk/errcheck: [v1.2.0](https://github.com/kisielk/errcheck/tree/v1.2.0)
@@ -168,7 +168,7 @@ If the application is deployed as a Pod in the cluster, please refer to the [nex
To use [Python client](https://github.com/kubernetes-client/python), run the following command: `pip install kubernetes`. See [Python Client Library page](https://github.com/kubernetes-client/python) for more installation options.

The Python client can use the same [kubeconfig file](/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/)
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes-client/python/tree/master/examples/example1.py).
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes-client/python/tree/master/examples).
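A minimal sketch of that flow, assuming a reachable cluster and a valid `~/.kube/config` (inside a Pod you would call `config.load_incluster_config()` instead):

```python
from kubernetes import client, config

config.load_kube_config()  # same kubeconfig discovery as kubectl
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}")
```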

### Other languages

@@ -96,7 +96,7 @@ The Corefile configuration includes the following [plugins](https://coredns.io/p
> The `pods insecure` option is provided for backward compatibility with kube-dns. You can use the `pods verified` option, which returns an A record only if there exists a pod in same namespace with matching IP. The `pods disabled` option can be used if you don't use pod records.

* [prometheus](https://coredns.io/plugins/prometheus/): Metrics of CoreDNS are available at http://localhost:9153/metrics in [Prometheus](https://prometheus.io/) format.
* [prometheus](https://coredns.io/plugins/metrics/): Metrics of CoreDNS are available at http://localhost:9153/metrics in [Prometheus](https://prometheus.io/) format.
* [forward](https://coredns.io/plugins/forward/): Any queries that are not within the cluster domain of Kubernetes will be forwarded to predefined resolvers (/etc/resolv.conf).
* [cache](https://coredns.io/plugins/cache/): This enables a frontend cache.
* [loop](https://coredns.io/plugins/loop/): Detects simple forwarding loops and halts the CoreDNS process if a loop is found.